Science.gov

Sample records for wavelet-based density estimation

  1. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number-of-violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, so the new model can be used by financial institutions as well.

  2. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in a single-channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single-channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement in nonstationary noise environments.
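
    The abstract outlines a concrete pipeline: wavelet-packet decomposition into bands, a time-adapted noise power estimate, and band-wise thresholding. Below is a minimal sketch of that pipeline with PyWavelets; the frame length, db8 wavelet, five-level packet tree, universal-threshold rule and recursive noise tracker are illustrative assumptions, not the paper's exact design.

    ```python
    # Minimal sketch of wavelet-packet soft thresholding for speech enhancement.
    # The critical-band grouping and the time-adapted noise tracker of the paper
    # are replaced by a fixed-depth packet tree and a simple per-frame noise
    # update; frame length, wavelet and smoothing factor are illustrative.
    import numpy as np
    import pywt

    def enhance(noisy, frame_len=1024, wavelet="db8", level=5, alpha=0.9):
        out = np.zeros(len(noisy), dtype=float)
        noise_pow = None
        for start in range(0, len(noisy) - frame_len + 1, frame_len):
            frame = noisy[start:start + frame_len].astype(float)
            wp = pywt.WaveletPacket(frame, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")
            band_pow = np.array([np.mean(node.data ** 2) for node in nodes])
            # Recursive (time-adapted) noise power estimate per band: follow the
            # band power downwards quickly, upwards slowly.
            if noise_pow is None:
                noise_pow = band_pow.copy()
            else:
                noise_pow = np.where(band_pow < noise_pow,
                                     band_pow,
                                     alpha * noise_pow + (1 - alpha) * band_pow)
            for node, npow in zip(nodes, noise_pow):
                thr = np.sqrt(2.0 * npow * np.log(max(node.data.size, 2)))
                node.data = pywt.threshold(node.data, thr, mode="soft")
            out[start:start + frame_len] = wp.reconstruct(update=True)[:frame_len]
        # Trailing partial frame is left untouched for brevity.
        return out
    ```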

  3. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  4. Measuring mass density and ultrasonic wave velocity: A wavelet-based method applied in ultrasonic reflection mode.

    PubMed

    Metwally, Khaled; Lefevre, Emmanuelle; Baron, Cécile; Zheng, Rui; Pithioux, Martine; Lasaygues, Philippe

    2016-02-01

    When assessing ultrasonic measurements of material parameters, the signal processing is an important part of the inverse problem. Measurements of thickness, ultrasonic wave velocity and mass density are required for such assessments. This study investigates the feasibility and the robustness of a wavelet-based processing (WBP) method based on a Jaffard-Meyer algorithm for calculating these parameters simultaneously and independently, using one single ultrasonic signal in the reflection mode. The appropriate transmitted incident wave, correlated with the mathematical properties of the wavelet decomposition, was determined using an adapted identification procedure to build a mathematically equivalent model for the electro-acoustic system. The method was tested on three groups of samples (polyurethane resin, bone and wood) using one 1-MHz transducer. For thickness and velocity measurements, the WBP method gave a relative error lower than 1.5%. The relative errors in the mass density measurements ranged between 0.70% and 2.59%. Despite discrepancies between manufactured and biological samples, the results obtained on the three groups of samples using the WBP method in the reflection mode were remarkably consistent, indicating that it is a reliable and efficient means of simultaneously assessing the thickness and the velocity of the ultrasonic wave propagating in the medium, and the apparent mass density of the material. PMID:26403278

  5. Fetal QRS detection and heart rate estimation: a wavelet-based approach.

    PubMed

    Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula

    2014-08-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. PMID:25070210

  6. Real-time wavelet based blur estimation on cell BE platform

    NASA Astrophysics Data System (ADS)

    Lukic, Nemanja; Platiša, Ljiljana; Pižurica, Aleksandra; Philips, Wilfried; Temerinac, Miodrag

    2010-01-01

    We propose a real-time system for blur estimation using wavelet decomposition. The system is based on an emerging multi-core microprocessor architecture (Cell Broadband Engine, Cell BE) known to outperform any available general purpose or DSP processor in the domain of real-time advanced video processing solutions. We start from a recent wavelet domain blur estimation algorithm which uses histograms of a local regularity measure called average cone ratio (ACR). This approach has shown very good potential for assessing the level of blur in an image, yet some important aspects remain to be addressed before the method becomes practical. Some of these aspects are explored in our work. Furthermore, we develop an efficient real-time implementation of this metric and integrate it into a system that captures live video. The proposed system estimates blur extent and renders the results to the remote user in real-time.

  7. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943

  8. Information geometric density estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Marchand-Maillet, Stéphane

    2015-01-01

    We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space amounts to finding a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.

  9. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
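
    As a minimal sketch of the Stokes-parameter computation the record describes, the snippet below combines six registered intensity images from the listed polarizer settings; the array names and the derived degree-of-linear-polarization map are illustrative additions. The resulting parameter images can then be passed to a 2-D wavelet decomposition (e.g., pywt.wavedec2) for the thresholding and segmentation step mentioned in the abstract.

    ```python
    # Stokes parameters from 0/45/90/135 degree linear and right/left circular
    # intensity images; all arrays are assumed to be co-registered and of equal shape.
    import numpy as np

    def stokes_parameters(I0, I45, I90, I135, Irc, Ilc):
        S0 = I0 + I90               # total intensity
        S1 = I0 - I90               # horizontal vs. vertical linear polarization
        S2 = I45 - I135             # +45 deg vs. -45 deg linear polarization
        S3 = Irc - Ilc              # right vs. left circular polarization
        # Degree of linear polarization, useful for separating manmade surfaces.
        dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, 1e-12)
        return S0, S1, S2, S3, dolp
    ```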

  10. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
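
    Since the record points at grid generation driven by wavelet coefficient magnitudes, here is a minimal sketch of that idea: flag locations where fine-scale detail coefficients are large and concentrate grid points there. The db2 wavelet, three levels and 10% threshold are illustrative assumptions, not taken from the paper or its source code.

    ```python
    # Flag regions of a 1-D field that need grid refinement, using the size of
    # the finest-scale wavelet detail coefficients as the indicator.
    import numpy as np
    import pywt

    def wavelet_refinement_flags(field, wavelet="db2", level=3, frac=0.1):
        coeffs = pywt.wavedec(field, wavelet, level=level)
        finest_detail = coeffs[-1]                      # finest-scale coefficients
        flags = np.abs(finest_detail) > frac * np.abs(finest_detail).max()
        # Map coefficient indices back to (approximate) grid locations.
        positions = np.linspace(0, len(field) - 1, len(finest_detail))
        return positions[flags]                         # locations needing refinement
    ```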

  11. Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings along with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density which directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation (√p)². Analysis of this method will bring to light a remarkable theoretical connection with the Fisher information of the density and, consequently, lead to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point set alignment, and empirical comparisons to known densities. The present method is also compared to fixed and variable bandwidth kernel density estimators. PMID:18390355
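
    A minimal sketch of the square-root idea follows: expand √p in an orthonormal basis, keep the coefficient vector on the unit sphere (so the density automatically integrates to one and is non-negative), and maximize the log-likelihood. Haar scaling functions at a single level, the step size and the iteration count are illustrative simplifications of the paper's general wavelet expansion and constrained optimizer.

    ```python
    # Maximum-likelihood estimation of sqrt(p) in a Haar scaling-function basis
    # on [0, 1); coefficients are kept on the unit sphere by projection.
    import numpy as np

    def haar_sqrt_density_mle(x, level=4, n_iter=500, lr=0.05):
        """x: samples in [0, 1). Returns coefficients c with sum(c**2) == 1,
        so that sqrt(p) = sum_k c_k * phi_{level,k} and p integrates to one."""
        n_bins = 2 ** level
        scale = 2.0 ** (level / 2.0)               # height of phi_{level,k}
        idx = np.minimum((x * n_bins).astype(int), n_bins - 1)
        # Initialize from the square root of a histogram, projected onto the
        # unit sphere (orthonormality makes this the right normalization).
        counts = np.bincount(idx, minlength=n_bins).astype(float)
        c = np.sqrt(counts + 1e-3)
        c /= np.linalg.norm(c)
        for _ in range(n_iter):
            amp = c[idx] * scale                   # sqrt(p)(x_i); p(x_i) = amp**2
            # Gradient of sum_i log p(x_i) = sum_i 2*log|amp_i| with respect to c.
            grad = np.zeros_like(c)
            np.add.at(grad, idx, 2.0 * scale / np.where(amp == 0, 1e-12, amp))
            c += lr * grad / len(x)
            c /= np.linalg.norm(c)                 # project back onto the sphere
        return c

    def evaluate_density(c, grid, level=4):
        n_bins = 2 ** level
        scale = 2.0 ** (level / 2.0)
        idx = np.minimum((grid * n_bins).astype(int), n_bins - 1)
        return (c[idx] * scale) ** 2
    ```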

  12. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  13. A family of orthonormal wavelet bases with dilation factor 4

    NASA Astrophysics Data System (ADS)

    Karoui, Abderrazek

    2006-05-01

    In this paper, we study a method for the construction of orthonormal wavelet bases with dilation factor 4. More precisely, for any integer M>0, we construct an orthonormal scaling filter m_M(ξ) that generates a mother scaling function φ_M associated with the dilation factor 4. The computation of the different coefficients of |m_M(ξ)|² is done by the use of a simple iterative method. Also, this work shows how this construction method provides us with a whole family of compactly supported orthonormal wavelet bases with arbitrarily high regularity. A first estimate of α(M), the asymptotic regularity of φ_M, is given by α(M) ~ 0.25M. Examples are provided to illustrate the results of this work.

  14. Estimation of coastal density gradients

    NASA Astrophysics Data System (ADS)

    Howarth, M. J.; Palmer, M. R.; Polton, J. A.; O'Neill, C. K.

    2012-04-01

    Density gradients in coastal regions with significant freshwater input are large and variable and are a major control of nearshore circulation. However, their measurement is difficult, especially where the gradients are largest close to the coast, with significant uncertainties because of a variety of factors - spatial and time scales are small, tidal currents are strong and water depths shallow. Whilst temperature measurements are relatively straightforward, measurements of salinity (the dominant control of spatial variability) can be less reliable in turbid coastal waters. Liverpool Bay has strong tidal mixing and receives fresh water principally from the Dee, Mersey, Ribble and Conwy estuaries, each with different catchment influences. Horizontal and vertical density gradients are variable both in space and time. The water column stratifies intermittently. A Coastal Observatory has been operational since 2002 with regular (quasi monthly) CTD surveys on a 9 km grid, an in situ station, an instrumented ferry travelling between Birkenhead and Dublin and a shore-based HF radar system measuring surface currents and waves. These measurements are complementary, each having different space-time characteristics. For coastal gradients the ferry is particularly useful since measurements are made right from the mouth of the Mersey. From measurements at the in situ site alone, density gradients can only be estimated from the tidal excursion. A suite of coupled physical, wave and ecological models is run in association with these measurements. The models, here on a 1.8 km grid, enable detailed estimation of nearshore density gradients, provided appropriate river run-off data are available. Examples are presented of the density gradients estimated from the different measurements and models, together with accuracies and uncertainties, showing that systematic time series measurements within a few kilometres of the coast are a high priority. (Here gliders are an exciting prospect for

  15. Wavelet based recognition for pulsar signals

    NASA Astrophysics Data System (ADS)

    Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.

    2015-06-01

    A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar. It can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from coefficients of a wavelet decomposition. Two types of wavelet based pulsar features are proposed. The energy based features reflect the multiscale distribution of the energy of coefficients. The singularity based features first classify the signals into a class with one peak and a class with two peaks by exploring the number of the straight wavelet modulus maxima lines perpendicular to the abscissa, and then implement further classification according to the features of skewness and kurtosis. Experimental results show that the wavelet based features achieve better performance than the shape parameter based features, not only in clustering and classification but also in the error rates of the recognition tasks.

  16. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet which leads to improved Riesz constants. The proposed wavelets have four vanishing moments.

  17. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during their processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a Maxwell density prior for the local observed variance and a Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  18. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve the market risk assessment of portfolios. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  19. Wavelet-based ultrasound image denoising: performance analysis and comparison.

    PubMed

    Rizi, F Yousefi; Noubari, H Ahmadi; Setarehdan, S K

    2011-01-01

    Ultrasound images are generally affected by multiplicative speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle noise filtering is thus a critical pre-processing step in medical ultrasound imaging provided that the diagnostic features of interest are not lost. A comparative study of the performance of alternative wavelet based ultrasound image denoising methods is presented in this article. In particular, the contourlet and curvelet techniques with dual tree complex and real and double density wavelet transform denoising methods were applied to real ultrasound images and results were quantitatively compared. The results show that the curvelet-based method performs better than the other methods and can effectively reduce most of the speckle noise content of a given image. PMID:22255196

  20. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using wavelet transform and a k-means based hybrid approach for clustering of hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure, which is often obscured in streamflow records, by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of randomness of the given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that provides an approximation of the entropy pattern of the streamflow data in each cluster. The tests for homogeneity reveal that the proposed approach works very well in regionalization.
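
    Because the record describes a concrete two-step recipe (per-scale entropy signatures from a wavelet decomposition, then k-means clustering of those signatures), a minimal sketch follows. The db4 wavelet, six decomposition levels, 30 histogram bins and eight clusters are illustrative assumptions, and `flows` is a hypothetical array of streamflow records, one row per catchment.

    ```python
    # Wavelet-based multiscale entropy signatures followed by k-means clustering.
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def multiscale_entropy_signature(series, wavelet="db4", levels=6, n_bins=30):
        coeffs = pywt.wavedec(series, wavelet, level=levels)
        signature = []
        for c in coeffs[1:]:                        # detail coefficients, coarse to fine
            hist, _ = np.histogram(c, bins=n_bins)
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            signature.append(-np.sum(p * np.log(p)))   # Shannon entropy at this scale
        return np.array(signature)

    def regionalize(flows, n_clusters=8, **kwargs):
        features = np.vstack([multiscale_entropy_signature(f, **kwargs) for f in flows])
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
        return labels, features
    ```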

  1. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
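
    As a minimal sketch of the projection step described above, the following PyWavelets snippet splits a signal into its approximation and detail subspaces by reconstructing one coefficient band at a time; the db4 wavelet and five levels are illustrative choices.

    ```python
    # Project a signal onto its orthogonal wavelet subspaces (one approximation
    # plus the detail spaces), so later analysis can work on each piece separately.
    import numpy as np
    import pywt

    def wavelet_subspace_projections(signal, wavelet="db4", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        projections = []
        for keep in range(len(coeffs)):
            # Zero every band except one, then reconstruct that band alone.
            masked = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
            projections.append(pywt.waverec(masked, wavelet)[:len(signal)])
        return projections    # projections[0] is the approximation subspace

    # The sum of all projections reproduces the original signal (up to boundary
    # effects), which is what makes the decomposition a set of orthogonal pieces.
    ```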

  2. Wavelet-based approach to character skeleton.

    PubMed

    You, Xinge; Tang, Yuan Yan

    2007-05-01

    Character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes belong to the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so that its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme of extracting the skeleton of a character based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and connection of them in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, the interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and that the skeleton is robust against noise and affine transforms. PMID:17491454

  3. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
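
    The record distinguishes the analytic (continuous, complex) wavelet transform, which yields instantaneous period, amplitude and phase, from the discrete transform used to isolate the circadian band. A minimal sketch of the first of these follows, using a complex Morlet CWT and a ridge extraction; the wavelet name, 16-32 h period range and 10-minute sampling interval are illustrative assumptions.

    ```python
    # Instantaneous circadian period, amplitude and phase from a complex Morlet CWT.
    import numpy as np
    import pywt

    def circadian_ridge(activity, dt_hours=1/6, periods_h=np.linspace(16, 32, 120)):
        freqs = 1.0 / periods_h                                  # cycles per hour
        fc = pywt.central_frequency("cmor1.5-1.0")               # wavelet centre frequency
        scales = fc / (freqs * dt_hours)                         # target frequencies -> CWT scales
        coefs, _ = pywt.cwt(activity, scales, "cmor1.5-1.0", sampling_period=dt_hours)
        ridge = np.argmax(np.abs(coefs), axis=0)                 # dominant scale at each time
        t = np.arange(coefs.shape[1])
        inst_period = periods_h[ridge]
        inst_amplitude = np.abs(coefs[ridge, t])
        inst_phase = np.angle(coefs[ridge, t])
        return inst_period, inst_amplitude, inst_phase
    ```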

  4. ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS

    EPA Science Inventory

    An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...

  5. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
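
    Since the record centres on the quadratic (Renyi order-2) entropy computed from a Gaussian kernel density estimate, the following sketch uses the closed-form information potential that such an estimate admits; the Silverman bandwidth rule is an illustrative default rather than the optimal bandwidth derived in the paper.

    ```python
    # Quadratic (Renyi order-2) entropy of a univariate sample via a Gaussian KDE.
    # For a Gaussian kernel, integral(p_hat^2) has the closed form computed below.
    import numpy as np

    def quadratic_entropy(x, bandwidth=None):
        x = np.asarray(x, dtype=float)
        n = x.size
        if bandwidth is None:
            bandwidth = 1.06 * x.std(ddof=1) * n ** (-1 / 5)     # Silverman's rule
        diffs = x[:, None] - x[None, :]
        s2 = 2.0 * bandwidth ** 2                                # kernel-kernel convolution variance
        info_potential = (np.exp(-diffs ** 2 / (2 * s2)).sum()
                          / (n ** 2 * np.sqrt(2 * np.pi * s2)))
        return -np.log(info_potential), info_potential          # (Renyi H2, FT-type index)
    ```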

  6. A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Ji, Yanju; Li, Suyi; Lin, Jun; Zhou, Fengdao; Yang, Guihong

    2013-09-01

    A grounded electrical source airborne transient electromagnetic (GREATEM) system on an airship enjoys high depth of prospecting and spatial resolution, as well as outstanding detection efficiency and easy flight control. However, the movement and swing of the front-fixed receiving coil can cause severe baseline drift, leading to inferior resistivity image formation. Consequently, the reduction of baseline drift in GREATEM data is of vital importance to inversion interpretation. To correct the baseline drift, a traditional interpolation method estimates the baseline `envelope' using the linear interpolation between the calculated start and end points of all cycles, and obtains the corrected signal by subtracting the envelope from the original signal. However, the effectiveness and efficiency of this removal are found to be low. Considering the characteristics of the baseline drift in GREATEM data, this study proposes a wavelet-based method based on multi-resolution analysis. The optimal wavelet basis and number of decomposition levels are determined iteratively by trial and error. This application uses the sym8 wavelet with 10 decomposition levels, obtains the level-10 approximation as the baseline drift, and then gets the corrected signal by removing the estimated baseline drift from the original signal. To examine the performance of our proposed method, we establish a dipping sheet model and calculate the theoretical response. Through simulations, we compare the signal-to-noise ratio, signal distortion, and processing speed of the wavelet-based method with those of the interpolation method. Simulation results show that the wavelet-based method outperforms the interpolation method. We also use field data to evaluate the methods, comparing the depth section images of apparent resistivity obtained using the original signal, the interpolation-corrected signal and the wavelet-corrected signal, respectively. The results confirm that our proposed wavelet-based method is an
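
    The abstract states the processing chain explicitly: a sym8 decomposition to 10 levels, the level-10 approximation taken as the baseline drift, and subtraction from the original record. A minimal PyWavelets sketch of that chain is shown below; the padding behaviour and signal length are whatever the recording provides.

    ```python
    # Baseline drift removal: keep only the coarsest approximation of a deep
    # wavelet decomposition and subtract it from the signal.
    import numpy as np
    import pywt

    def remove_baseline_drift(signal, wavelet="sym8", level=10):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Keep only the level-10 approximation; zero all detail bands.
        drift_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        drift = pywt.waverec(drift_coeffs, wavelet)[:len(signal)]
        return signal - drift, drift
    ```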

  7. Estimating animal population density using passive acoustics

    PubMed Central

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-01-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds

  8. Estimating animal population density using passive acoustics.

    PubMed

    Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L

    2013-05-01

    Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds

  9. Density Estimation for Projected Exoplanet Quantities

    NASA Astrophysics Data System (ADS)

    Brown, Robert A.

    2011-05-01

    Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of "projected" mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(−log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements—particularly on the minimum sample size n—for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.

  10. Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints

    SciTech Connect

    I. W. Ginsberg

    1999-09-01

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  11. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  12. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.

  13. Directional wavelet based features for colonic polyp classification.

    PubMed

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied often and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information from the subbands of the wavelet based methods. Most of the 25 approaches in total have already been published in other texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non wavelet based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state
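
    The novel step singled out in the abstract is fitting a Weibull distribution to the coefficients of each subband and using the fitted parameters as features. A minimal sketch of that feature extractor is given below on ordinary 2-D DWT subbands; the db4 wavelet, three levels and the zero-location Weibull fit are illustrative stand-ins for the curvelet/contourlet/shearlet subbands used in the paper.

    ```python
    # Per-subband Weibull (shape, scale) features from a 2-D wavelet decomposition.
    import numpy as np
    import pywt
    from scipy.stats import weibull_min

    def weibull_subband_features(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        features = []
        for detail_level in coeffs[1:]:           # (cH, cV, cD) triples per level
            for band in detail_level:
                data = np.abs(band).ravel() + 1e-8
                shape, _, scale = weibull_min.fit(data, floc=0)   # location fixed at zero
                features.extend([shape, scale])
        return np.array(features)
    ```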

  14. Bird population density estimated from acoustic signals

    USGS Publications Warehouse

    Dawson, D.K.; Efford, M.G.

    2009-01-01

    Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha⁻¹, SE 0.03 ha⁻¹) were consistent with estimates of the adult male population density from mist-netting (0.36 ha⁻¹, SE 0.12 ha⁻¹). The fitted model predicts sound attenuation of 0.11 dB m⁻¹ (SE 0.01 dB m⁻¹) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant

  15. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376

  16. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  17. Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation.

    PubMed

    Yamane, Ikko; Sasaki, Hiroaki; Sugiyama, Masashi

    2016-07-01

    Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring non-Gaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering. PMID:27171983

  18. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  19. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms results obtained by previously published schemes. PMID:20608811
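
    The record's pipeline (threshold selection for significant coefficients, coding of their positions, then entropy coding) can be sketched as follows; the db6 wavelet, five levels and the energy-retention rule for picking the threshold are illustrative assumptions, and the Huffman stage is only approximated here by counting retained coefficients.

    ```python
    # Wavelet ECG compression sketch: keep only coefficients above a threshold
    # chosen to retain a given fraction of signal energy.
    import numpy as np
    import pywt

    def compress_ecg(ecg, wavelet="db6", level=5, keep_energy=0.999):
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        flat = np.concatenate([c.ravel() for c in coeffs])
        # Smallest magnitude threshold that keeps the requested energy fraction.
        mags = np.sort(np.abs(flat))[::-1]
        cum = np.cumsum(mags ** 2) / np.sum(mags ** 2)
        thr = mags[np.searchsorted(cum, keep_energy)]
        kept = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
        n_nonzero = sum(int(np.count_nonzero(c)) for c in kept)
        reconstruction = pywt.waverec(kept, wavelet)[:len(ecg)]
        ratio = len(ecg) / max(n_nonzero, 1)     # crude pre-entropy-coding ratio
        return reconstruction, ratio
    ```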

  20. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  1. Density Estimations in Laboratory Debris Flow Experiments

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Malcherek, Andreas; Fischer, Jan-Thomas; Pudasaini, Shiva P.

    2016-04-01

    Bulk density and its variation is an important physical quantity to estimate the solid-liquid fractions in two-phase debris flows. Here we present mass and flow depth measurements for experiments performed in a large-scale laboratory set up. Once the mixture is released and moves down the inclined channel, the measurements allow us to determine the bulk density evolution throughout the debris flow. Flow depths are determined by ultrasonic pulse reflection, and the mass is measured with a total normal force sensor. The data were obtained at 50 Hz. The initial two-phase material was composed of 350 kg debris with a water content of 40%. A very fine pebble with mean particle diameter of 3 mm, particle density of 2760 kg/m³ and bulk density of 1400 kg/m³ in dry condition was chosen as the solid material. Measurements reveal that the debris bulk density remains high from the head to the middle of the debris body whereas it drops substantially at the tail. This indicates lower water content at the tail, compared to the head and the middle portion of the debris body. This means that the solid and fluid fractions vary strongly in a non-linear manner along the flow path, and from the head to the tail of the debris mass. Importantly, this spatial-temporal density variation plays a crucial role in determining the impact forces associated with the dynamics of the flow. Our setup allows for investigating different two-phase material compositions, including large fluid fractions, with high resolution. The considered experimental set up may enable us to transfer the observed phenomena to natural large-scale events. Furthermore, the measurement data allow evaluating results of numerical two-phase mass flow simulations. These experiments are part of the project avaflow.org that intends to develop a GIS-based open source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural

  2. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards the goal of identifying a weak physical process contributing to forecast error, are also pointed out.

  3. Kernel density estimation using graphical processing unit

    NASA Astrophysics Data System (ADS)

    Sunarko; Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
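
    The parallel scheme in the record assigns one grid node per GPU scalar processor, each node summing Gaussian kernel contributions from all particles. A minimal vectorized NumPy sketch of that per-node computation is shown below as a CPU reference; the grid layout and bandwidth are illustrative, and the CUDA-C kernel itself is not reproduced.

    ```python
    # Gaussian-kernel density estimate evaluated on a 2-D grid of node points.
    # Each node performs the same sum over all particles, which is the work the
    # record assigns to one GPU scalar processor per node.
    import numpy as np

    def kde_on_grid(particles, grid_x, grid_y, bandwidth=0.1):
        """particles: (n, 2) array; grid_x, grid_y: 1-D node coordinates."""
        gx, gy = np.meshgrid(grid_x, grid_y)                  # node points
        nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)    # (m, 2)
        d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(axis=-1)
        norm = 1.0 / (2 * np.pi * bandwidth ** 2 * len(particles))
        density = norm * np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
        return density.reshape(gx.shape)
    ```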

  4. Review of some results in bivariate density estimation

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1982-01-01

    Results are reviewed for choosing smoothing parameters for some bivariate density estimators. Experience gained in comparing the effects of smoothing parameters on probability density estimators for univariate and bivariate data is summarized.

  5. ESTIMATING MICROORGANISM DENSITIES IN AEROSOLS FROM SPRAY IRRIGATION OF WASTEWATER

    EPA Science Inventory

    This document summarizes current knowledge about estimating the density of microorganisms in the air near wastewater management facilities, with emphasis on spray irrigation sites. One technique for modeling microorganism density in air is provided and an aerosol density estimati...

  6. Traffic characterization and modeling of wavelet-based VBR encoded video

    SciTech Connect

    Yu Kuo; Jabbari, B.; Zafar, S.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
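
    A minimal sketch of an N-state Markov bit-rate model of the kind described above is shown below; the transition matrix and per-state rates are illustrative placeholders, not the values fitted in the paper.

```python
import numpy as np

def simulate_markov_rates(P, state_rates, n_frames, rng):
    """Frame-rate trace from an N-state Markov chain: P[i, j] is the
    transition probability from state i to j, state_rates[i] the mean
    bit rate emitted while in state i."""
    states = np.zeros(n_frames, dtype=int)
    for t in range(1, n_frames):
        states[t] = rng.choice(len(state_rates), p=P[states[t - 1]])
    # small per-frame fluctuation around each state's mean rate
    return state_rates[states] * (1.0 + 0.05 * rng.normal(size=n_frames))

rng = np.random.default_rng(7)
P = np.array([[0.90, 0.08, 0.02],        # illustrative 3-state transition matrix
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
rates = np.array([2.0, 5.0, 9.0])        # Mbit/s per state (hypothetical)
trace = simulate_markov_rates(P, rates, 1000, rng)
print(trace.mean(), np.corrcoef(trace[:-1], trace[1:])[0, 1])   # mean rate, lag-1 correlation
```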

  7. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
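
    The sketch below shows a simplified, non-adaptive version of the idea: compute a continuous wavelet transform, average the power over a frequency band characteristic of the target pattern, and threshold it. The wavelet, band and threshold are illustrative; the paper's contribution is the automatic selection of such parameters.

```python
import numpy as np
import pywt

def band_power_detector(signal, fs, f_lo, f_hi, wavelet="morl", factor=3.0):
    """Flag samples where CWT power in [f_lo, f_hi] Hz exceeds `factor`
    times its median -- a crude, fixed-parameter stand-in for the paper's
    adaptive selection of CWT parameters."""
    scales = np.arange(1, 128)
    coefs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    power = np.mean(np.abs(coefs[band]) ** 2, axis=0)
    return power > factor * np.median(power), power

fs = 200.0
t = np.arange(0, 10, 1.0 / fs)
eeg = np.random.default_rng(2).normal(size=t.size)
eeg[800:1000] += 2.0 * np.sin(2 * np.pi * 12.0 * t[800:1000])   # synthetic 12 Hz "spindle"
mask, power = band_power_detector(eeg, fs, f_lo=10.0, f_hi=16.0)
print("flagged samples:", int(mask.sum()), "strongest response near", np.argmax(power) / fs, "s")
```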

  8. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into image blocks in different frequency bands by a 2-D wavelet transformation, and each block is quantized and coded with a Huffman coding scheme. A demonstration system was developed, showing that, under the requirement of real-time processing, the compression ratio can be very high with no significant loss of target signal in the restored radar image.
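
    A hedged sketch of such a pipeline follows: 2-D wavelet decomposition of one raster-scanned block, uniform quantization of every subband, and an entropy-based estimate of the Huffman-coded size (the Huffman table construction itself is omitted). The wavelet, level and quantization step are illustrative.

```python
import numpy as np
import pywt

def encode_block(block, wavelet="db2", level=2, q_step=4.0):
    """Decompose one raster-scanned block, uniformly quantize every subband,
    and estimate the coded size from the symbol entropy (a lower bound on
    the Huffman code length; the Huffman table itself is omitted)."""
    coeffs = pywt.wavedec2(block, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    symbols = np.concatenate([np.round(b / q_step).astype(int).ravel() for b in bands])
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    bits = symbols.size * float(-(p * np.log2(p)).sum())
    return symbols, bits

rng = np.random.default_rng(3)
block = rng.normal(size=(128, 128)) + 5.0 * (rng.random((128, 128)) > 0.995)  # noise + sparse echoes
_, bits = encode_block(block)
print(f"estimated coded size: {bits / 8:.0f} bytes vs {block.size * 2} bytes raw (16-bit samples)")
```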

  9. Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.

    2016-01-01

    Using the wavelet-transform modulus maxima (WTMM) approach, we study the dynamics of cerebral blood flow (CBF) in rats, aiming to reveal responses of the macro- and microcerebral circulation to changes in peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantification of essentially different reactions in the CBF dynamics at the level of large and small cerebral vessels. We conclude that, unlike the macrocirculation, which is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by substantial changes in CBF complexity.

  10. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
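
    A minimal sketch of the quantities mentioned above, assuming a standard multilevel DWT: relative wavelet energies p_j = E_j / E_total and the normalized Shannon wavelet entropy. The wavelet choice, number of levels and synthetic signals are illustrative.

```python
import numpy as np
import pywt

def wavelet_energy_entropy(epoch, wavelet="db4", level=5):
    """Relative wavelet energies p_j = E_j / E_total over the resolution
    levels of a DWT, and the normalized Shannon wavelet entropy."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    entropy = float(-np.sum(p * np.log(p)) / np.log(len(p)))
    return p, entropy

fs = 256.0
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(4)
background = rng.normal(size=t.size)                                           # broadband "background EEG"
rhythmic = 3.0 * np.sin(2 * np.pi * 3.0 * t) + 0.3 * rng.normal(size=t.size)   # recruitment-like rhythm
for name, x in [("background", background), ("rhythmic", rhythmic)]:
    p, H = wavelet_energy_entropy(x)
    print(name, "relative energies:", np.round(p, 2), "entropy:", round(H, 2))
```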

  11. Template-free wavelet-based detection of local symmetries.

    PubMed

    Puspoki, Zsuzsanna; Unser, Michael

    2015-10-01

    Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883

  12. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

    In this paper, we present a wavelet-based medical image compression scheme such that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply perceptually lossless criteria to quantize the wavelet transform coefficients of each subband so that visual distortions are reduced to an unnoticeable level. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  13. Wavelet based characterization of ex vivo vertebral trabecular bone structure with 3T MRI compared to microCT

    SciTech Connect

    Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S

    2005-04-11

    Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.

  14. Concrete density estimation by rebound hammer method

    NASA Astrophysics Data System (ADS)

    Ismail, Mohamad Pauzi bin; Jefri, Muhamad Hafizie Bin; Abdullah, Mahadzir Bin; Masenwat, Noor Azreen bin; Sani, Suhairy bin; Mohd, Shukri; Isa, Nasharuddin bin; Mahmud, Mohamad Haniza bin

    2016-01-01

    Concrete is the most common and cheapest material for radiation shielding. Compressive strength is the main parameter checked when determining concrete quality. However, for shielding purposes, density is the parameter that needs to be considered. X- and gamma-radiation is effectively absorbed by a material with a high atomic number and high density, such as concrete. High strength normally implies higher density in concrete, but this is not always true. This paper explains and discusses the correlation between rebound hammer testing and density for concrete containing hematite aggregates. A comparison is also made with normal concrete, i.e. concrete containing crushed granite.

  15. A New Wavelet Based Approach to Assess Hydrological Models

    NASA Astrophysics Data System (ADS)

    Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.

    2014-12-01

    In this study, a new wavelet based multi-scale performance measure (Multiscale Nash Sutcliffe Criteria, and Multiscale Normalized Root Mean Square Error) for hydrological model comparison was developed and tested. The new measure provides a quantitative measure of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing error, as well as under- and over-prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet based performance measures (i.e., MNSC and MNRMSE) are more reliable measures than traditional performance measures such as the Nash Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
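
    The sketch below illustrates one way such a multiscale score could be formed, using a simple à trous (starlet-style) additive decomposition and the ordinary Nash-Sutcliffe efficiency per scale; the kernel, number of levels and synthetic series are illustrative and not the authors' exact formulation.

```python
import numpy as np

def a_trous(x, levels=4):
    """Additive à trous decomposition: returns [w_1, ..., w_J, smooth] with
    sum(details) + smooth == x, using the B3-spline kernel dilated by 2^j."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c, parts = np.asarray(x, float), []
    for j in range(levels):
        hk = np.zeros((len(h) - 1) * 2 ** j + 1)
        hk[:: 2 ** j] = h                       # insert 2^j - 1 zeros between taps
        c_next = np.convolve(c, hk, mode="same")
        parts.append(c - c_next)                # detail at scale j
        c = c_next
    parts.append(c)                             # final smooth
    return parts

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
t = np.arange(730)
obs = 10 + 5 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 1.0, t.size)          # synthetic "observed" flow
sim = 10 + 5 * np.sin(2 * np.pi * (t - 5) / 365) + rng.gamma(2.0, 1.0, t.size)    # model with a timing error
for j, (o, s) in enumerate(zip(a_trous(obs), a_trous(sim))):
    label = f"detail scale {j + 1}" if j < 4 else "smooth"
    print(label, "NSE =", round(nse(o, s), 2))
```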

  16. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  17. A wavelet-based approach to fall detection.

    PubMed

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  18. A Wavelet-Based Approach to Fall Detection

    PubMed Central

    Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo

    2015-01-01

    Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719

  19. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
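
    A hedged sketch of the classification idea follows: wavelet detail-energy texture features per image and a Gaussian maximum-likelihood rule over three classes. The synthetic textures, class labels and wavelet settings are illustrative stand-ins for the wavelet-frame features used in the paper.

```python
import numpy as np
import pywt
from scipy.stats import multivariate_normal

def texture_features(img, wavelet="db2", level=3):
    """Log-energy of every detail subband: a compact wavelet texture signature."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return np.array([np.log(np.mean(b ** 2) + 1e-12)
                     for detail in coeffs[1:] for b in detail])

rng = np.random.default_rng(6)
def make_sample(scale):        # synthetic surrogate textures of different coarseness
    base = rng.normal(size=(64 // scale, 64 // scale))
    return np.kron(base, np.ones((scale, scale)))

classes = {"sand": 1, "silt": 2, "clay": 4}     # hypothetical class-to-coarseness association
train = {name: np.array([texture_features(make_sample(s)) for _ in range(30)])
         for name, s in classes.items()}
# maximum-likelihood rule: assign to the class whose fitted Gaussian gives the highest log-density
models = {name: multivariate_normal(X.mean(0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1]))
          for name, X in train.items()}
test = texture_features(make_sample(2))
print(max(models, key=lambda name: models[name].logpdf(test)))   # most likely texture class
```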

  20. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
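
    A minimal sketch of the feature-extraction step is given below: per-level DWT energy fractions of AE segments, followed by k-means clustering as a simple stand-in for the paper's adaptive genetic clustering. The sampling rate, burst frequencies and wavelet settings are illustrative.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def ae_features(segment, wavelet="db4", level=4):
    """Per-level energy fractions of the DWT of one AE signal segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    e = np.array([np.sum(c ** 2) for c in coeffs])
    return e / e.sum()

rng = np.random.default_rng(8)
fs, n = 1_000_000, 4096
def segment(dull):             # synthetic AE bursts; a "dull" wheel emits lower-frequency content
    x = rng.normal(size=n)
    f = 40_000 if dull else 150_000
    x += 3.0 * np.sin(2 * np.pi * f * np.arange(n) / fs) * (rng.random(n) > 0.7)
    return x

X = np.array([ae_features(segment(dull=i >= 30)) for i in range(60)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("sharp segments:", labels[:30])
print("dull segments: ", labels[30:])
```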

  1. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  2. Wavelet-based multifractal analysis of laser biopsy imagery

    NASA Astrophysics Data System (ADS)

    Jagtap, Jaidip; Ghosh, Sayantan; Panigrahi, Prasanta K.; Pradhan, Asima

    2012-03-01

    In this work, we report a wavelet-based multi-fractal study of images of dysplastic and neoplastic HE-stained human cervical tissues captured in the transmission mode when illuminated by laser light (He-Ne laser, 632.8 nm). It is well known that the morphological changes occurring during the progression of diseases like cancer manifest in their optical properties, which can be probed for differentiating the various stages of cancer. Here, we use the multi-resolution properties of the wavelet transform to analyze the optical changes. For this, we have used a novel laser imagery technique which provides us with a composite image of the absorption by the different cellular organelles. As the disease progresses, due to the growth of new cells, the ratio of organelle to cellular volume changes, manifesting in the laser imagery of such tissues. In order to develop a metric that can quantify the changes in such systems, we make use of wavelet-based fluctuation analysis. The changing self-similarity during disease progression can be well characterized by the Hurst exponent and the scaling exponent. Due to the use of the Daubechies' family of wavelet kernels, we can extract polynomial trends of different orders, which help us characterize the underlying processes effectively. In this study, we observe that the Hurst exponent decreases as the cancer progresses. This measure could be used to differentiate between different stages of cancer, which could lead to the development of a novel non-invasive method for cancer detection and characterization.
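
    The sketch below estimates a Hurst exponent from the scaling of detail-coefficient variance across levels, assuming fractional-Gaussian-noise-like behaviour (Var(d_j) proportional to 2^(j(2H-1))); this is a simplified surrogate for the Daubechies-based fluctuation analysis used in the study, and the test signals are synthetic.

```python
import numpy as np
import pywt

def hurst_wavelet(x, wavelet="db3", max_level=7):
    """Estimate H from the slope of log2(Var d_j) versus level j, assuming
    fGn-like scaling Var(d_j) ~ 2^(j*(2H-1)), so H = (slope + 1) / 2."""
    coeffs = pywt.wavedec(np.asarray(x, float), wavelet, level=max_level)
    details = coeffs[1:][::-1]                  # reorder as j = 1 (finest) ... J (coarsest)
    logvar = [np.log2(np.var(d)) for d in details]
    slope = np.polyfit(np.arange(1, len(logvar) + 1), logvar, 1)[0]
    return (slope + 1.0) / 2.0

rng = np.random.default_rng(9)
white = rng.normal(size=4096)                             # expect H close to 0.5
smooth = np.convolve(white, np.ones(8) / 8, mode="same")  # positively correlated, H above 0.5
print(round(hurst_wavelet(white), 2), round(hurst_wavelet(smooth), 2))
```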

  3. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  4. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

  5. 2D wavelet transform with different adaptive wavelet bases for texture defect inspection based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Hong; Mo, Yu L.

    1998-08-01

    There are many textures, such as woven fabrics, having repeating textons. In order to handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. In the method, a new concept of different adaptive wavelet bases is used to match the texture pattern. The 2D wavelet transform has two different adaptive orthonormal wavelet bases for rows and columns, which differ from Daubechies wavelet bases. The orthonormal wavelet bases for rows and columns are generated by a genetic algorithm. The experimental results demonstrate the ability of the different adaptive wavelet bases to characterize the texture and locate the defects in it.

  6. Estimating Geometric Dislocation Densities in Polycrystalline Materials from Orientation Imaging Microscopy

    SciTech Connect

    Man, Chi-Sing; Gao, Xiang; Godefroy, Scott; Kenik, Edward A

    2010-01-01

    Herein we consider polycrystalline materials which can be taken as statistically homogeneous and whose grains can be adequately modeled as rigid-plastic. Our objective is to obtain, from orientation imaging microscopy (OIM), estimates of geometrically necessary dislocation (GND) densities.

  7. Atmospheric density estimation using satellite precision orbit ephemerides

    NASA Astrophysics Data System (ADS)

    Arudra, Anoop Kumar

    The current atmospheric density models are not capable enough to accurately model the atmospheric density, which varies continuously in the upper atmosphere mainly due to the changes in solar and geomagnetic activity. Inaccurate atmospheric modeling results in erroneous density values that are not accurate enough to calculate the drag estimates acting on a satellite, thus leading to errors in the prediction of satellite orbits. This research utilized precision orbit ephemerides (POE) data from satellites in an orbit determination process to make corrections to existing atmospheric models, thus resulting in improved density estimates. The work done in this research made corrections to the Jacchia family atmospheric models and Mass Spectrometer Incoherent Scatter (MSIS) family atmospheric models using POE data from the Ice, Cloud and Land Elevation Satellite (ICESat) and the Terra Synthetic Aperture Radar-X Band (TerraSAR-X) satellite. The POE data obtained from these satellites was used in an orbit determination scheme which performs a sequential filter/smoother process to the measurements and generates corrections to the atmospheric models to estimate density. This research considered several days from the year 2001 to 2008 encompassing all levels of solar and geomagnetic activity. Density and ballistic coefficient half-lives with values of 1.8, 18, and 180 minutes were used in this research to observe the effect of these half-life combinations on density estimates. This research also examined the consistency of densities derived from the accelerometers of the Challenging Mini Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE) satellites by Eric Sutton, from the University of Colorado. The accelerometer densities derived by Sutton were compared with those derived by Sean Bruinsma from CNES, Department of Terrestrial and Planetary Geodesy, France. The Sutton densities proved to be nearly identical to the Bruinsma densities for all the

  8. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  9. Wavelet-based face verification for constrained platforms

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2005-03-01

    Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We shall also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level-3 LL-subband results in robustness against high-rate compression and noise interference.
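
    A hedged sketch of the wavelet-only feature idea follows: the level-3 LL subband of each image is used as its feature vector and verification is done by a nearest-template distance. The images, wavelet and acceptance threshold are illustrative, not the paper's protocol.

```python
import numpy as np
import pywt

def ll_features(img, wavelet="haar", level=3):
    """Level-3 LL (approximation) subband flattened into a feature vector;
    its size is 1/4**level of the original image."""
    ll = pywt.wavedec2(img, wavelet, level=level)[0]
    v = ll.ravel()
    return (v - v.mean()) / (v.std() + 1e-9)

def verify(probe, template, threshold=0.5):
    """Accept the claimed identity if the normalized template distance is small."""
    d = float(np.linalg.norm(probe - template) / np.sqrt(probe.size))
    return d < threshold, round(d, 2)

rng = np.random.default_rng(10)
enrolled = rng.random((112, 92))                          # stand-in for an enrolled face image
same = enrolled + 0.05 * rng.normal(size=enrolled.shape)  # same subject, slight variation
other = rng.random((112, 92))                             # a different subject
t = ll_features(enrolled)
print(verify(ll_features(same), t), verify(ll_features(other), t))
```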

  10. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, the protection of intellectual and material rights for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for the protection and authenticity of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant to geometric, filtering and StirMark attacks, with a low false-alarm rate.
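
    The sketch below shows a generic DWT-domain watermarking scheme in the same spirit: a key-seeded pseudorandom sequence added to detail coefficients and detected later by correlation. It is not the WM2.0 algorithm, and the simple mean-correlation detector stands in for the Neyman-Pearson test.

```python
import numpy as np
import pywt

def embed(img, key, strength=5.0, wavelet="db2"):
    """Add a key-seeded pseudorandom watermark to the level-1 diagonal detail."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    w = np.random.default_rng(key).standard_normal(cD.shape)
    return pywt.idwt2((cA, (cH, cV, cD + strength * w)), wavelet)

def detect(img, key, wavelet="db2"):
    """Correlate the diagonal detail of a test image with the key's watermark."""
    _, (_, _, cD) = pywt.dwt2(img, wavelet)
    w = np.random.default_rng(key).standard_normal(cD.shape)
    return float(np.mean(cD * w))          # clearly positive => watermark likely present

rng = np.random.default_rng(11)
original = rng.random((256, 256)) * 255.0
marked = embed(original, key=42)
print("marked, right key  :", round(detect(marked, 42), 2))
print("marked, wrong key  :", round(detect(marked, 7), 2))
print("unmarked, right key:", round(detect(original, 42), 2))
```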

  11. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  12. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications such as military optical reconnaissance and traffic control. However, most of the popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, like weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most of the previous works mainly focus on solving one of those problems, and thus the previous methods cannot achieve a balance between performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework to solve the aforementioned three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited for classifying and coarsely deciding the airport candidate region; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms the existing algorithms in terms of detection accuracy and processing efficiency.

  13. Complex wavelet based speckle reduction using multiple ultrasound images

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad Shahin; Tahtali, Murat; Pickering, Mark R.

    2014-04-01

    Ultrasound imaging is a dominant tool for diagnosis and evaluation in medical imaging systems. However, its major limitation is that the images it produces suffer from low quality due to the presence of speckle noise; reducing this noise is therefore essential for better clinical diagnoses. The key purpose of a speckle reduction algorithm is to obtain a speckle-free high-quality image whilst preserving important anatomical features, such as sharp edges. As this can be better achieved using multiple ultrasound images rather than a single image, we introduce a complex wavelet-based algorithm for the speckle reduction and sharp edge preservation of two-dimensional (2D) ultrasound images using multiple ultrasound images. The proposed algorithm does not rely on straightforward averaging of multiple images but, rather, in each scale, overlapped wavelet detail coefficients are weighted using dynamic threshold values and then reconstructed by averaging. Validation of the proposed algorithm is carried out using simulated and real images with synthetic speckle noise and phantom data consisting of multiple ultrasound images, with the experimental results demonstrating that speckle noise is significantly reduced whilst sharp edges are preserved without discernible distortions. The proposed approach performs better, both qualitatively and quantitatively, than previously existing approaches.

  14. A wavelet-based feature vector model for DNA clustering.

    PubMed

    Bao, J P; Yuan, R Y

    2015-01-01

    DNA data are important in the bioinformatic domain. To extract useful information from the enormous collection of DNA sequences, DNA clustering is often adopted to efficiently deal with DNA data. The alignment-free method is a very popular way of creating feature vectors from DNA sequences, which are then used to compare DNA similarities. This paper proposes a wavelet-based feature vector (WFV) model, which is also an alignment-free method. From the perspective of signal processing, a DNA sequence is a sequence of digital signals. However, most traditional alignment-free models only extract features in the time domain. The WFV model uses discrete wavelet transform to adaptively yield feature vectors with a fixed dimension based on the features in both the time and frequency domains. The level of wavelet transform is adjusted according to the length of the DNA sequence rather than a fixed manually set value. The WFV model prefers a 32-dimension feature vector, which greatly promotes system performance. We compared the WFV model with the other five alignment-free models, i.e., k-tuple, DMK, TSM, AMI, and CV, on several large-scale DNA datasets on the DNA clustering application by means of the K-means algorithm. The experimental results showed that the WFV model outperformed the other models in terms of both the clustering results and the running time. PMID:26782569
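
    A hedged sketch of the alignment-free pipeline follows: map each sequence to a numeric signal, choose the DWT depth from the sequence length, keep a fixed-dimension approximation vector, and cluster with k-means. The base-to-number mapping and the synthetic GC-content families are illustrative, not the WFV model's exact construction.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

MAP = {"A": -1.0, "T": -1.0, "C": 1.0, "G": 1.0}   # illustrative weak/strong base-pair indicator

def wfv(seq, dim=32, wavelet="haar"):
    """Fixed-dimension wavelet feature vector for one DNA sequence."""
    x = np.array([MAP[b] for b in seq], dtype=float)
    # pick the transform depth from the sequence length so that the
    # approximation band carries roughly `dim` coefficients
    level = max(1, int(np.floor(np.log2(len(x) / dim))))
    approx = pywt.wavedec(x, wavelet, level=level)[0]
    return np.resize(approx, dim)                   # pad/trim to a fixed dimension

rng = np.random.default_rng(12)
def random_seq(gc, n=1024):                         # two synthetic families differing in GC content
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]
    return "".join(rng.choice(list("ACGT"), size=n, p=p))

seqs = [random_seq(0.3) for _ in range(20)] + [random_seq(0.7) for _ in range(20)]
X = np.array([wfv(s) for s in seqs])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```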

  15. Wavelet-based laser-induced ultrasonic inspection in pipes

    NASA Astrophysics Data System (ADS)

    Baltazar-López, Martín E.; Suh, Steve; Chona, Ravinder; Burger, Christian P.

    2006-02-01

    The feasibility of detecting localized defects in tubing using Wavelet based laser-induced ultrasonic-guided waves as an inspection method is examined. Ultrasonic guided waves initiated and propagating in hollow cylinders (pipes and/or tubes) are studied as an alternative, robust nondestructive in situ inspection method. Contrary to other traditional methods for pipe inspection, in which contact transducers (electromagnetic, piezoelectric) and/or coupling media (submersion liquids) are used, this method is characterized by its non-contact nature. This characteristic is particularly important in applications involving Nondestructive Evaluation (NDE) of materials because the signal being detected corresponds only to the induced wave. Cylindrical guided waves are generated using a Q-switched Nd:YAG laser and a Fiber Tip Interferometry (FTI) system is used to acquire the waves. Guided wave experimental techniques are developed for the measurement of phase velocities to determine elastic properties of the material and the location and geometry of flaws including inclusions, voids, and cracks in hollow cylinders. As compared to the traditional bulk wave methods, the use of guided waves offers several important potential advantages. Some of which includes better inspection efficiency, the applicability to in-situ tube inspection, and fewer evaluation fluctuations with increased reliability.

  16. Wavelet-based multiresolution analysis of Wivenhoe Dam water temperatures

    NASA Astrophysics Data System (ADS)

    Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.

    2011-05-01

    Water temperature measurements from Wivenhoe Dam offer a unique opportunity for studying fluctuations of temperatures in a subtropical dam as a function of time and depth. Cursory examination of the data indicate a complicated structure across both time and depth. We propose simplifying the task of describing these data by breaking the time series at each depth into physically meaningful components that individually capture daily, subannual, and annual (DSA) variations. Precise definitions for each component are formulated in terms of a wavelet-based multiresolution analysis. The DSA components are approximately pairwise uncorrelated within a given depth and between different depths. They also satisfy an additive property in that their sum is exactly equal to the original time series. Each component is based upon a set of coefficients that decomposes the sample variance of each time series exactly across time and that can be used to study both time-varying variances of water temperature at each depth and time-varying correlations between temperatures at different depths. Each DSA component is amenable for studying a certain aspect of the relationship between the series at different depths. The daily component in general is weakly correlated between depths, including those that are adjacent to one another. The subannual component quantifies seasonal effects and in particular isolates phenomena associated with the thermocline, thus simplifying its study across time. The annual component can be used for a trend analysis. The descriptive analysis provided by the DSA decomposition is a useful precursor to a more formal statistical analysis.
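
    The sketch below shows one way to build such an additive decomposition: zero all but one group of wavelet levels before reconstruction, so the resulting components sum back to the original series. The transform, wavelet and the grouping of levels into "daily", "subannual" and "annual" are illustrative and differ from the paper's exact multiresolution analysis.

```python
import numpy as np
import pywt

def mra_components(x, groups, wavelet="db4", level=11):
    """Additive multiresolution analysis: reconstruct the series from disjoint
    groups of wavelet levels (index 0 = approximation, 1 = coarsest detail,
    ..., `level` = finest detail). The inverse transform is linear, so the
    returned components sum back to the original series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    parts = {}
    for name, keep in groups.items():
        kept = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        parts[name] = pywt.waverec(kept, wavelet)[: len(x)]
    return parts

hours = np.arange(2 * 365 * 24)
rng = np.random.default_rng(13)
temp = (20 + 5 * np.sin(2 * np.pi * hours / (365 * 24))      # annual cycle
        + 2 * np.sin(2 * np.pi * hours / 24)                 # daily cycle
        + 0.5 * rng.normal(size=hours.size))                 # weather "noise"

# illustrative grouping of levels: coarse ~ annual/trend, mid ~ subannual, fine ~ daily
groups = {"annual": range(0, 1), "subannual": range(1, 7), "daily": range(7, 12)}
dsa = mra_components(temp, groups)
print({k: round(float(np.std(v)), 2) for k, v in dsa.items()})
print("components sum to the series:", bool(np.allclose(sum(dsa.values()), temp)))
```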

  17. Mean thermospheric density estimation derived from satellite constellations

    NASA Astrophysics Data System (ADS)

    Li, Alan; Close, Sigrid

    2015-10-01

    This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information for the estimation scheme includes ranging measurements and general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure seeks to utilize order statistics to estimate the probability of the minimum drag coefficient achievable, and amalgamating all measurements across multiple time periods allows estimation of the probability density of the ballistic factor itself. The model does not depend on prior models of the atmosphere; instead we require estimation of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the probability of the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.

  18. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models have a latent assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but also to classification. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series such as stock market return predictions.

  19. The estimation of body density in rugby union football players.

    PubMed Central

    Bell, W

    1995-01-01

    The general regression equation of Durnin and Womersley for estimating body density from skinfold thicknesses in young men was examined by comparing the estimated density from this equation with the measured density of a group of 45 rugby union players of similar age. Body density was measured by hydrostatic weighing with simultaneous measurement of residual volume. Additional measurements included stature, body mass and skinfold thicknesses at the biceps, triceps, subscapular and suprailiac sites. The estimated density was significantly different from the measured density (P < 0.001), equivalent to a mean overestimation of relative fat of approximately 4%. A new set of prediction equations for estimating density was formulated from linear regression using the logarithm of single and sums of skinfold thicknesses. Equations were derived from a validation sample (n = 22) and tested on a cross-validation sample (n = 23). The standard error of the estimate (s.e.e.) of the equations ranged from 0.0058 to 0.0062 g ml⁻¹. The derived equations were successfully cross-validated. Differences between measured and estimated densities were not significant (P > 0.05), total errors ranging from 0.0067 to 0.0092 g ml⁻¹. An exploratory assessment was also made of the effect of fatness and aerobic fitness on the prediction equations. The equations should be applied to players of similar age and playing ability, and for the purpose of identifying group characteristics. Application of the equations to individuals may give rise to errors of between -3.9% and +2.5% total body fat in two-thirds of cases. PMID:7788218
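
    For orientation, the sketch below applies a Durnin-Womersley-style log-skinfold regression and converts the predicted density to percent body fat with Siri's equation; the regression coefficients and skinfold values are illustrative placeholders, not the equations derived in this study.

```python
import math

def estimated_density(skinfolds_mm, c=1.1610, m=0.0632):
    """Durnin-Womersley-style prediction D = c - m * log10(sum of skinfolds).
    The coefficients c and m are illustrative placeholders, not the
    general equation or the rugby-specific equations discussed above."""
    return c - m * math.log10(sum(skinfolds_mm))

def siri_percent_fat(density_g_ml):
    """Siri two-compartment conversion from body density to percent body fat."""
    return (4.95 / density_g_ml - 4.50) * 100.0

# biceps, triceps, subscapular and suprailiac skinfolds in mm (made-up values)
skinfolds = [5.0, 9.0, 12.0, 14.0]
d = estimated_density(skinfolds)
print(f"estimated density {d:.4f} g/ml -> {siri_percent_fat(d):.1f}% body fat")
```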

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
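
    A hedged sketch of the mean-subtraction idea follows: after a 3-D wavelet decomposition, the mean of every spatial plane of each spatially low-pass subband is subtracted and kept as side information. The decomposition structure and subband selection are simplified relative to the actual compressor.

```python
import numpy as np
import pywt

def mean_subtract_lowpass(cube, wavelet="haar", level=2):
    """3-D wavelet decomposition of a (bands, rows, cols) cube, then subtract
    the mean of every spatial plane of each spatially low-pass subband (the
    approximation and the 'daa' detail subbands). The removed means are kept
    as side information for the decoder."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level, axes=(0, 1, 2))
    side_info = []

    def demean(arr):
        means = arr.mean(axis=(1, 2), keepdims=True)   # one mean per spatial plane
        side_info.append(means)
        return arr - means

    coeffs[0] = demean(coeffs[0])
    for detail in coeffs[1:]:
        if "daa" in detail:                            # spectral detail, spatially low-pass
            detail["daa"] = demean(detail["daa"])
    return coeffs, side_info

rng = np.random.default_rng(14)
offsets = np.linspace(1.0, 3.0, 32)[:, None, None]       # band-dependent offsets
cube = offsets + 0.1 * rng.normal(size=(32, 64, 64))     # synthetic hyperspectral cube
coeffs, side = mean_subtract_lowpass(cube)
print("approximation plane means now ~0:", bool(np.allclose(coeffs[0].mean(axis=(1, 2)), 0)))
```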

  1. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, unlocking of electronic devices, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in a 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).

  2. A wavelet-based approach to face verification/recognition

    NASA Astrophysics Data System (ADS)

    Jassim, Sabah; Sellahewa, Harin

    2005-10-01

    Face verification/recognition is a tough challenge in comparison to identification based on other biometrics such as the iris or fingerprints. Yet, due to its unobtrusive nature, the face is naturally suitable for security-related applications. The face verification process relies on feature extraction from face images. Current schemes are either geometric-based or template-based. In the latter, the face image is statistically analysed to obtain a set of feature vectors that best describe it. Performance of a face verification system is affected by image variations due to illumination, pose, occlusion, expressions and scale. This paper extends our recent work on face verification for constrained platforms, where the feature vector of a face image consists of the coefficients in the wavelet-transformed LL-subbands at depth 3 or more. It was demonstrated that the wavelet-only feature vector scheme has a performance comparable to sophisticated state-of-the-art schemes when tested on two benchmark databases (ORL and BANCA). The significance of those results stems from the fact that the size of the k-th LL-subband is 1/4^k of the original image size. Here, we investigate the use of wavelet coefficients in various subbands at level 3 or 4 using various wavelet filters. We shall compare the performance of the wavelet-based scheme for different filters at different subbands with a number of state-of-the-art face verification/recognition schemes on two benchmark databases, namely ORL and the control section of BANCA. We shall demonstrate that our schemes have comparable performance to (or outperform) the best performing other schemes.

  3. Wavelet-based ground vehicle recognition using acoustic signals

    NASA Astrophysics Data System (ADS)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will

  4. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-06-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and these cause problems for key seismic processing steps such as surface-related multiple elimination or wave-equation based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among many interpolation techniques, matching pursuit interpolation is a robust tool to interpolate regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multi-component matching pursuit methods generally used the sinusoidal basis function, which has been shown to be effective for interpolating multi-component marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which enhance the de-aliasing performance of matching pursuit methods beyond that of sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that the range should be determined based on Nyquist criteria. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based approach, both when we use pressure data only and when we use pressure and particle velocity data together.
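
    A minimal 1-D sketch of matching pursuit with a dictionary of shifted and scaled wavelet atoms is given below, selecting atoms by inner product as the abstract describes; the Ricker atoms, peak-wavenumber range and data are illustrative, and the multicomponent interpolation machinery is not reproduced.

```python
import numpy as np

def ricker(t, peak_k):
    """Ricker wavelet atom parameterized by its peak (wave)number."""
    a = (np.pi * peak_k * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def matching_pursuit(signal, x, peak_ks, n_iter=15):
    """Greedy matching pursuit: at each step pick the shifted/scaled wavelet
    atom with the largest inner product with the residual and subtract it."""
    atoms = []
    for k in peak_ks:                       # dictionary of atoms centred at every sample
        for c in x:
            a = ricker(x - c, k)
            atoms.append(a / np.linalg.norm(a))
    D = np.array(atoms)                     # (n_atoms, n_samples)
    residual, model = signal.copy(), np.zeros_like(signal)
    for _ in range(n_iter):
        proj = D @ residual                 # inner products with all atoms
        i = int(np.argmax(np.abs(proj)))
        model += proj[i] * D[i]
        residual -= proj[i] * D[i]
    return model, residual

x = np.linspace(0.0, 1.0, 200)
truth = ricker(x - 0.3, 8.0) - 0.6 * ricker(x - 0.7, 12.0)
noisy = truth + 0.05 * np.random.default_rng(15).normal(size=x.size)
model, res = matching_pursuit(noisy, x, peak_ks=[6.0, 8.0, 10.0, 12.0])
print("residual energy fraction:", round(float(np.sum(res ** 2) / np.sum(noisy ** 2)), 3))
```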

  5. Wavelet-based multicomponent matching pursuit trace interpolation

    NASA Astrophysics Data System (ADS)

    Choi, Jihun; Byun, Joongmoo; Seol, Soon Jee; Kim, Young

    2016-09-01

    Typically, seismic data are sparsely and irregularly sampled due to limitations in the survey environment, and these cause problems for key seismic processing steps such as surface-related multiple elimination or wave-equation-based migration. Various interpolation techniques have been developed to alleviate the problems caused by sparse and irregular sampling. Among many interpolation techniques, matching pursuit interpolation is a robust tool to interpolate regularly sampled data with large receiver separation, such as crossline data in marine seismic acquisition, when both pressure and particle velocity data are used. Multicomponent matching pursuit methods generally used the sinusoidal basis function, which has been shown to be effective for interpolating multicomponent marine seismic data in the crossline direction. In this paper, we report the use of wavelet basis functions, which enhance the de-aliasing performance of matching pursuit methods beyond that of sinusoidal basis functions. We also found that the range of the peak wavenumber of the wavelet is critical to the stability of the interpolation results and the de-aliasing performance, and that the range should be determined based on Nyquist criteria. In addition, we reduced the computational cost by adopting the inner product of the wavelet and the input data to find the parameters of the wavelet basis function instead of using L-2 norm minimization. Using synthetic data, we illustrate that for aliased data, wavelet-based matching pursuit interpolation yields more stable results than the sinusoidal-function-based approach, both when we use pressure data only and when we use pressure and particle velocity data together.

  6. Atmospheric Density Corrections Estimated from Fitted Drag Coefficients

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T. F.; Mance, S. R.; Mehta, P.

    2010-12-01

    Fitted drag coefficients estimated using GEODYN, the NASA Goddard Space Flight Center Precision Orbit Determination and Geodetic Parameter Estimation Program, are used to create density corrections. The drag coefficients were estimated for Stella, Starlette and GFZ using satellite laser ranging (SLR) measurements; and for GEOSAT Follow-On (GFO) using SLR, Doppler, and altimeter crossover measurements. The data analyzed covers years ranging from 2000 to 2004 for Stella and Starlette, 2000 to 2002 and 2005 for GFO, and 1995 to 1997 for GFZ. The drag coefficient was estimated every eight hours. The drag coefficients over the course of a year show a consistent variation about the theoretical and yearly average values that primarily represents a semi-annual/seasonal error in the atmospheric density models used. The atmospheric density models examined were NRLMSISE-00 and MSIS-86. The annual structure of the major variations was consistent among all the satellites for a given year and consistent among all the years examined. The fitted drag coefficients can be converted into density corrections every eight hours along the orbit of the satellites. In addition, drag coefficients estimated more frequently can provide a higher frequency of density correction.
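
    The conversion from fitted drag coefficients to density corrections rests on the fact that drag acceleration scales with the product of the drag coefficient and the atmospheric density, so a fitted coefficient that departs from its physical value can be folded into the model density as a multiplicative factor. A toy sketch under that assumption (all numbers invented):

    ```python
    import numpy as np

    # fitted drag coefficients from orbit determination, one per 8-hour arc (invented)
    cd_fitted = np.array([2.6, 2.9, 2.4, 2.7])
    cd_physical = 2.5                        # theoretical / yearly-average value (invented)

    # because drag acceleration ~ Cd * rho * v^2, an over-fitted Cd implies the
    # model density was too low over that arc (and vice versa)
    density_correction = cd_fitted / cd_physical              # multiplicative factors
    rho_model = np.array([3.1e-13, 2.8e-13, 3.0e-13, 2.9e-13])  # kg/m^3, e.g. from NRLMSISE-00
    rho_corrected = rho_model * density_correction
    print(density_correction, rho_corrected)
    ```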

  7. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  8. NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES

    PubMed Central

    Seregin, Arseni; Wellner, Jon A.

    2011-01-01

    We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ ℝ^d and for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e−y for y ∈ ℝ; in this case, the resulting class of densities P(e−y) = {p = exp(−g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e−y) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877

  9. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in the outputs of a hydrologic model. The real-data case studies included simulation results from the process-based Soil Water Assessment Tool (SWAT) model as well as from statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the
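
    A minimal sketch of the scale-wise performance idea: both series are decomposed level by level and a Nash-Sutcliffe score is computed on the detail coefficients of each level. PyWavelets' stationary (undecimated) wavelet transform is used here as a stand-in for the à trous decomposition; the wavelet, level count, and test series are assumptions.

    ```python
    import numpy as np
    import pywt

    def multiscale_nse(obs, sim, wavelet="haar", level=4):
        """Nash-Sutcliffe efficiency computed separately on the detail
        coefficients of each decomposition level."""
        obs_coeffs = pywt.swt(obs, wavelet, level=level)
        sim_coeffs = pywt.swt(sim, wavelet, level=level)
        scores = []
        for (_, obs_d), (_, sim_d) in zip(obs_coeffs, sim_coeffs):
            num = np.sum((obs_d - sim_d) ** 2)
            den = np.sum((obs_d - obs_d.mean()) ** 2)
            scores.append(1.0 - num / den)
        return scores        # one NSE value per scale

    # toy usage: the signal length must be a multiple of 2**level for swt
    t = np.arange(256, dtype=float)
    observed = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.randn(256)
    simulated = np.sin(2 * np.pi * t / 32)
    print(multiscale_nse(observed, simulated))
    ```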

  10. Quantiles, parametric-select density estimation, and bi-information parameter estimators

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1982-01-01

    A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of the identified model) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum-entropy models with respect to the entropy of a probability density and the entropy of a quantile density, respectively. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

  11. A new approach for estimating the density of liquids.

    PubMed

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-01

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed. PMID:27494268

  12. An Infrastructureless Approach to Estimate Vehicular Density in Urban Environments

    PubMed Central

    Sanguesa, Julio A.; Fogue, Manuel; Garrido, Piedad; Martinez, Francisco J.; Cano, Juan-Carlos; Calafate, Carlos T.; Manzoni, Pietro

    2013-01-01

    In Vehicular Networks, communication success usually depends on the density of vehicles, since a higher density allows having shorter and more reliable wireless links. Thus, knowing the density of vehicles in a vehicular communications environment is important, as better opportunities for wireless communication can show up. However, vehicle density is highly variable in time and space. This paper deals with the importance of predicting the density of vehicles in vehicular environments to take decisions for enhancing the dissemination of warning messages between vehicles. We propose a novel mechanism to estimate the vehicular density in urban environments. Our mechanism uses as input parameters the number of beacons received per vehicle, and the topological characteristics of the environment where the vehicles are located. Simulation results indicate that, unlike previous proposals solely based on the number of beacons received, our approach is able to accurately estimate the vehicular density, and therefore it could support more efficient dissemination protocols for vehicular environments, as well as improve previously proposed schemes. PMID:23435054

  13. ENSO forecast using a wavelet-based decomposition

    NASA Astrophysics Data System (ADS)

    Deliège, Adrien; Nicolay, Samuel; Fettweis, Xavier

    2015-04-01

    The aim of this work is to introduce a new method for forecasting major El Niño/La Niña events with the use of a wavelet-based mode decomposition. These major events are related to sea surface temperature anomalies in the tropical Pacific Ocean: anomalous warmings are known as El Niño events, while excessive coolings are referred to as La Niña episodes. These climatological phenomena are of primary importance since they are involved in many teleconnections; predicting them long before they occur is therefore a crucial concern. First, we perform a wavelet transform (WT) of the monthly sampled El Niño Southern Oscillation 3.4 index (from 1950 to present) and compute the associated scale spectrum, which can be seen as the energy carried in the WT as a function of the scale. The spectrum exhibits five peaks, corresponding to time scales of about 7, 20, 31, 43 and 61 months respectively. Therefore, the Niño 3.4 signal can be decomposed into five dominant oscillating components with time-varying amplitudes, the latter being given by the modulus of the WT at the associated pseudo-periods. The reconstruction of the index based on these five components is accurate, since more than 93% of the El Niño/La Niña events of the last 60 years are recovered and no major event is erroneously predicted. Then, the components are smoothly extrapolated using polynomials and added together, thus giving forecasts of the Niño 3.4 index several years ahead. In order to increase the reliability of the forecasts, we perform hindcasts of several months (i.e. retroactive forecasts) which can be validated against the existing data. It turns out that most of the major events can be accurately predicted up to three years in advance, which makes our methodology competitive for such forecasts. Finally, we discuss the El Niño conditions currently under way and give indications about the next La Niña event.
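
    The first step, computing a scale spectrum from the wavelet transform of a monthly index and locating its peaks, can be sketched as follows. The Morlet wavelet, the scale grid, and the synthetic stand-in index are assumptions, and the polynomial extrapolation step is omitted.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    # synthetic stand-in for a monthly Nino 3.4 index (real data not included here)
    months = np.arange(12 * 60)
    index = (np.sin(2 * np.pi * months / 43) + 0.6 * np.sin(2 * np.pi * months / 61)
             + 0.3 * np.random.randn(months.size))

    # grid of scales; the mapping from scale to pseudo-period depends on the
    # wavelet's centre frequency
    scales = np.arange(2, 128)
    coefs, _ = pywt.cwt(index, scales, "morl")      # continuous wavelet transform

    # "scale spectrum": energy carried by the transform as a function of scale
    spectrum = np.mean(np.abs(coefs) ** 2, axis=1)
    peaks, _ = find_peaks(spectrum)
    print("dominant scales:", scales[peaks])

    # the time-varying amplitude of one dominant component is |WT| at that scale
    dominant = scales[peaks[np.argmax(spectrum[peaks])]]
    amplitude = np.abs(coefs[np.where(scales == dominant)[0][0]])
    ```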

  14. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha/h and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h/ha, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
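
    The adjustment itself is a simple ratio estimator: the detection ratio measured on the intensively searched subsample rescales the rapid-method counts from the full sample. A toy numerical sketch (all counts invented):

    ```python
    import numpy as np

    # rapid-method counts on the subsample of plots that were also searched intensively
    rapid_on_intensive = np.array([6, 4, 9, 5])        # birds detected per plot
    actual_density = np.array([8, 5, 11, 7])           # birds present per plot (intensive search)

    detection_ratio = rapid_on_intensive.mean() / actual_density.mean()

    # rapid-method counts on the full sample of plots
    rapid_all = np.array([3, 7, 6, 2, 5, 9, 4, 6])
    adjusted_density = rapid_all.mean() / detection_ratio   # corrected density index
    print(detection_ratio, adjusted_density)
    ```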

  15. Multibaseline polarimetric synthetic aperture radar tomography of forested areas using wavelet-based distribution compressive sensing

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong

    2015-01-01

    The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distribution compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a new wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly inverse the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.

  16. Comparison of neuron selection algorithms of wavelet-based neural network

    NASA Astrophysics Data System (ADS)

    Mei, Xiaodan; Sun, Sheng-He

    2001-09-01

    Wavelet networks have increasingly received considerable attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have become interested in employing wavelet functions as activation functions and have obtained satisfying results in approximating and localizing signals. However, function estimation becomes more and more complex with the growth of the input dimension. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. It is obvious that an exhaustive search procedure is not practical when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has a faster convergence speed and a smaller error for signal approximation. Therefore, the Genetic algorithm and the Tabu Search algorithm are studied and compared through experiments. This paper first presents the structure of the wavelet-based neural network, then introduces these two selection algorithms and discusses their properties and learning processes, and analyzes the experiments and results. We used two wavelet functions to test these two algorithms. The experiments show that the Tabu Search selection algorithm's performance is better than that of the Genetic selection algorithm; TSA has a faster convergence rate than GA under the same stopping criterion.

  17. Resolution enhancement of composite spectra using wavelet-based derivative spectrometry.

    PubMed

    Kharintsev, S S; Kamalova, D I; Salakhov, M Kh; Sevastianov, A A

    2005-01-01

    An approach based on the use of the continuous wavelet transform (CWT) in derivative spectrometry (DS) is considered. Within the framework of this approach we develop a numerical differentiation algorithm with continuous wavelets for improving the resolution of composite spectra. The wavelet-based derivative spectrometry (WDS) method yields better contrast in the differential curves than the conventional derivative spectrometry method. A main advantage is that, as opposed to DS, WDS gives stable estimates of the derivative in the wavelet domain without the use of regularization. The wavelet shape and the information redundancy are of the greatest importance when the continuous wavelet transform is used. As an appropriate wavelet we propose using the nth derivative of a component with an a priori known shape. The energy distribution over scales allows one to determine a unique wavelet projection and in that way to avoid information redundancy. A comparative study of WDS and DS with the statistical regularization method (SRM) is made; in particular, the limits of applicability of these methods are given. Examples of the application of both DS and WDS to improving the resolution of synthetic composite bands and real-world composite bands from molecular spectroscopy are given. PMID:15556433
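
    A generic way to see why a wavelet-domain derivative is stable without regularization is to smooth and differentiate in a single convolution with a derivative-of-Gaussian (DOG-type) wavelet. The sketch below uses SciPy for this and illustrates the principle rather than the authors' exact CWT algorithm; the test band and scale are invented.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.linspace(-10, 10, 2000)
    # synthetic composite band: two overlapping components plus noise
    band = np.exp(-(x - 0.8) ** 2) + 0.7 * np.exp(-(x + 0.9) ** 2 / 1.5)
    noisy = band + 0.01 * np.random.randn(x.size)

    # naive finite differences amplify the noise badly
    d2_naive = np.gradient(np.gradient(noisy, x), x)

    # convolving with the 2nd derivative of a Gaussian (a DOG-type wavelet) gives a
    # smoothed 2nd derivative; sigma plays the role of the wavelet scale (in samples)
    d2_wavelet = gaussian_filter1d(noisy, sigma=20, order=2)
    ```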

  18. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), principal component analysis (a dimensionality-reduction method), and nonparametric density estimation, has been used successfully to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that would otherwise be unrecognizable, and to place important constraints on the spatial distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how nonparametric density estimation can substitute for the true densities of both the simulated and the real samples in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation via a Bayes rule.
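
    A schematic of the final fitting step in one dimension: kernel density estimates stand in for the true densities of the observed sample and of each simulated group, and the group weights follow from a non-negative least-squares fit, one equation per evaluation point. All data and the 1-D setting are illustrative simplifications of the 5-D problem.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    # simulated "stellar populations" (1-D stand-ins for the 5-D observable space)
    groups = [rng.normal(0.0, 1.0, 2000), rng.normal(3.0, 0.7, 2000)]
    # observed sample: an unknown mixture of the two groups
    observed = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 0.7, 300)])

    grid = np.linspace(-4, 6, 200)
    obs_density = gaussian_kde(observed)(grid)                       # "real sample" density
    group_densities = np.column_stack([gaussian_kde(g)(grid) for g in groups])

    # least-squares estimation of the group weights (one equation per grid point)
    weights, _ = nnls(group_densities, obs_density)
    print(weights / weights.sum())      # approximately [0.7, 0.3] for this toy mixture
    ```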

  19. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682

  20. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  1. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

    PubMed Central

    Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    The traditional confidence interval associated with the ordinary least squares estimator of a linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator by utilizing well-defined inversion-based kernel smoothing techniques to estimate the conditional probability density of the dependent random variable. Simulation results show that, given a small sample size, our method significantly increases the power as compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882

  2. Ionospheric electron density profile estimation using commercial AM broadcast signals

    NASA Astrophysics Data System (ADS)

    Yu, De; Ma, Hong; Cheng, Li; Li, Yang; Zhang, Yufeng; Chen, Wenjun

    2015-08-01

    A new method for estimating the bottom electron density profile by using commercial AM broadcast signals as non-cooperative signals is presented in this paper. Without requiring any dedicated transmitters, the required input data are the measured elevation angles of signals transmitted from the known locations of broadcast stations. The input data are inverted for the QPS model parameters depicting the electron density profile of the signal's reflection area by using a probabilistic inversion technique. This method has been validated on synthesized data and used with the real data provided by an HF direction-finding system situated near the city of Wuhan. The estimated parameters obtained by the proposed method have been compared with vertical ionosonde data and have been used to locate the Shijiazhuang broadcast station. The simulation and experimental results indicate that the proposed ionospheric sounding method is feasible for obtaining useful electron density profiles.

  3. Density estimates for deep-sea gastropod assemblages

    NASA Astrophysics Data System (ADS)

    Rex, Michael A.; Etter, Ron J.; Nimeskern, Phillip W.

    1990-04-01

    Extensive boxcore sampling in the Atlantic Continental Slope and Rise study permitted the first precise measurement of gastropod density in the bathyal region of the deep sea. Gastropod density decreases significantly and exponentially with depth (250-3494 m), and density-depth regression lines do not differ significantly in either slope or elevation over horizontal scales of approximately 1000 km. The subclasses Prosobranchia and Opisthobranchia both show significant decreases in density with depth. Predatory taxa (neogastropods and opisthobranchs) exhibit significantly steeper declines in density with depth than do taxa dominated by deposit feeders (archaeogastropods and mesogastropods). Members of upper trophic levels may be more sensitive to the reduction in nutrient input with increased depth because of the energy loss between trophic levels in the food chain. A comparison of density estimates of gastropods from boxcore, grab and anchor-dredge samples taken in the same region revealed no significant differences in density-depth relationships among the sampling methods. A synthesis of data from 777 boxcore samples collected from the Atlantic, Caribbean and Pacific over a depth range of 250-7298 m indicates that the decline in gastropod density with depth is a global trend with only moderate inter-regional variation.
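
    An exponential decline of this kind is usually quantified with a log-linear regression of density on depth; a toy sketch with invented numbers:

    ```python
    import numpy as np
    from scipy.stats import linregress

    depth = np.array([250, 500, 1000, 1500, 2000, 2500, 3000, 3500])     # metres (illustrative)
    density = np.array([42.0, 30.0, 17.0, 9.5, 6.0, 3.2, 2.0, 1.1])      # individuals per unit area (invented)

    # fit log(density) = slope * depth + intercept
    fit = linregress(depth, np.log(density))
    print("density ~ exp(%.5f * depth + %.2f), r^2 = %.3f"
          % (fit.slope, fit.intercept, fit.rvalue ** 2))
    ```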

  4. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point density range of 3DEP to provide national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.

  5. Estimating Density Gradients and Drivers from 3D Ionospheric Imaging

    NASA Astrophysics Data System (ADS)

    Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

    2009-12-01

    The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007

  6. Estimation of Enceladus Plume Density Using Cassini Flight Data

    NASA Technical Reports Server (NTRS)

    Wang, Eric K.; Lee, Allan Y.

    2011-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of water vapor plumes in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. During some of these Enceladus flybys, the spacecraft attitude was controlled by a set of three reaction wheels. When the disturbance torque imparted on the spacecraft was predicted to exceed the control authority of the reaction wheels, thrusters were used to control the spacecraft attitude. Using telemetry data of reaction wheel rates or thruster on-times collected from four low-altitude Enceladus flybys (in 2008-10), one can reconstruct the time histories of the Enceladus plume jet density. The 1 sigma uncertainty of the estimated density is 5.9-6.7% (depending on the density estimation methodology employed). These plume density estimates could be used to confirm measurements made by other onboard science instruments and to support the modeling of Enceladus plume jets.

  7. Quantitative volumetric breast density estimation using phase contrast mammography

    NASA Astrophysics Data System (ADS)

    Wang, Zhentian; Hauser, Nik; Kubik-Huch, Rahel A.; D'Isidoro, Fabio; Stampanoni, Marco

    2015-05-01

    Phase contrast mammography using a grating interferometer is an emerging technology for breast imaging. It provides complementary information to the conventional absorption-based methods. Additional diagnostic values could be further obtained by retrieving quantitative information from the three physical signals (absorption, differential phase and small-angle scattering) yielded simultaneously. We report a non-parametric quantitative volumetric breast density estimation method by exploiting the ratio (dubbed the R value) of the absorption signal to the small-angle scattering signal. The R value is used to determine breast composition and the volumetric breast density (VBD) of the whole breast is obtained analytically by deducing the relationship between the R value and the pixel-wise breast density. The proposed method is tested by a phantom study and a group of 27 mastectomy samples. In the clinical evaluation, the estimated VBD values from both cranio-caudal (CC) and anterior-posterior (AP) views are compared with the ACR scores given by radiologists to the pre-surgical mammograms. The results show that the estimated VBD results using the proposed method are consistent with the pre-surgical ACR scores, indicating the effectiveness of this method in breast density estimation. A positive correlation is found between the estimated VBD and the diagnostic ACR score for both the CC view (p = 0.033) and AP view (p = 0.001). A linear regression between the results of the CC view and AP view showed a correlation coefficient γ = 0.77, which indicates the robustness of the proposed method and the quantitative character of the additional information obtained with our approach.

  8. Wavelet Based Analytical Expressions to Steady State Biofilm Model Arising in Biochemical Engineering.

    PubMed

    Padma, S; Hariharan, G

    2016-06-01

    In this paper, we have developed an efficient wavelet-based approximation method for the steady-state biofilm model arising in enzyme kinetics. A Chebyshev wavelet-based approximation method is successfully introduced for solving the nonlinear steady-state biofilm reaction model. To the best of our knowledge, no rigorous wavelet-based solution has previously been reported for the proposed model. Analytical solutions for substrate concentration have been derived for all values of the parameters δ and SL. The power of the method is confirmed. Some numerical examples are presented to demonstrate the validity and applicability of the wavelet method. Moreover, the use of Chebyshev wavelets is found to be simple, efficient, flexible, and convenient, with small computational cost. PMID:26661721

  9. Scatterer Number Density Considerations in Reference Phantom Based Attenuation Estimation

    PubMed Central

    Rubert, Nicholas; Varghese, Tomy

    2014-01-01

    Attenuation estimation and imaging has the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and an SNR approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1,647 ROIs in 5 ex vivo bovine livers we find an envelope SNR of 1.10 ± 0.12 when imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article we examine attenuation estimation in numerical phantoms, TM phantoms with variable SNDs, and ex vivo bovine liver prior to and following thermal coagulation. We find that reference phantom based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SND, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find the standard deviation of attenuation slope estimates increases from 0.07 dB/cm MHz to 0.25 dB/cm MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in TM phantoms with a large estimation kernel size (16 mm axially by 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (< 0.01 dB/cm MHz). We also compare results obtained with reference phantom based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800

  10. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.

  11. Density Estimation of Comet 103P/Hartley 2

    NASA Astrophysics Data System (ADS)

    Bowling, T.; Richardson, J.; Melosh, J.; Thomas, P.

    2011-10-01

    Our analysis was constrained to the region of the neck that was directly imaged and well illuminated during the encounter. A homogeneous density and a rotation period of 18.34 hours are assumed. Only rotation about the principal axis was accounted for. The principal rotation period was likely shorter on timescales effective for surface modification [2]. Additionally, spin components about minor axes introduce a further degree of error. A global minimum is found for a bulk density ρ = 220 kg/m3 (one sigma = 130-620 kg/m3), which corresponds to a comet mass of m = 1.84 x 10^11 kg (one sigma = 1.51-5.18 x 10^11 kg). This is lower than, but within error ranges of, previous comet density estimates (sec. 4.2 of [3]).

  12. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.

  13. Density estimators in particle hydrodynamics. DTFE versus regular SPH

    NASA Astrophysics Data System (ADS)

    Pelupessy, F. I.; Schaap, W. E.; van de Weygaert, R.

    2003-05-01

    We present the results of a study comparing density maps reconstructed by the Delaunay Tessellation Field Estimator (DTFE) and by regular SPH kernel-based techniques. The density maps are constructed from the outcome of an SPH particle hydrodynamics simulation of a multiphase interstellar medium. The comparison between the two methods clearly demonstrates the superior performance of the DTFE with respect to conventional SPH methods, in particular at locations where SPH appears to fail. Filamentary and sheetlike structures form telling examples. The DTFE is a fully self-adaptive technique for reconstructing continuous density fields from discrete particle distributions, and is based upon the corresponding Delaunay tessellation. Its principal asset is its complete independence of arbitrary smoothing functions and parameters specifying the properties of these. As a result it manages to faithfully reproduce the anisotropies of the local particle distribution and through its adaptive and local nature proves to be optimally suited for uncovering the full structural richness in the density distribution. Through the improvement in local density estimates, calculations invoking the DTFE will yield a much better representation of physical processes which depend on density. This will be crucial in the case of feedback processes, which play a major role in galaxy and star formation. The presented results form an encouraging step towards the application and insertion of the DTFE in astrophysical hydrocodes. We describe an outline for the construction of a particle hydrodynamics code in which the DTFE replaces kernel-based methods. Further discussion addresses the issue and possibilities for a moving grid-based hydrocode invoking the DTFE, and Delaunay tessellations, in an attempt to combine the virtues of the Eulerian and Lagrangian approaches.

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimates of a response density, which also implies estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
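
    NESSUS itself is a NASA code, but the contrast between plain Monte Carlo and Latin hypercube sampling for estimating response-density parameters (mean, standard deviation, percentile) can be sketched with SciPy's QMC module (SciPy 1.7 or later); the two-variable response function and sample size below are invented stand-ins.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    def response(x):
        """Invented nonlinear response of two random design variables."""
        return x[:, 0] ** 2 + 3.0 * np.sin(x[:, 1])

    n = 500
    rng = np.random.default_rng(1)

    # plain Monte Carlo: independent standard-normal draws
    mc = response(rng.standard_normal((n, 2)))

    # Latin hypercube sampling: stratified uniforms mapped through the normal inverse CDF
    lhs_unit = qmc.LatinHypercube(d=2, seed=1).random(n)
    lhs = response(norm.ppf(lhs_unit))

    for name, sample in [("MC", mc), ("LHS", lhs)]:
        print(name, "mean=%.3f  std=%.3f  95th pct=%.3f"
              % (sample.mean(), sample.std(ddof=1), np.percentile(sample, 95)))
    ```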

  15. Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds

    NASA Astrophysics Data System (ADS)

    Arshakian, Tigran G.; Ossenkopf, Volker

    2016-01-01

    Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≳5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient. Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions

  16. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels better than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
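
    The blind estimator reduces to a one-dimensional root-finding problem on the normalized received sequence. The sketch below assumes a BPSK-over-AWGN model and the unit-power normalization sigma^2 = 1 - A^2 (an assumption, not stated in the abstract), and replaces the binary search with a plain fixed-point iteration on the same estimating equation; all signal parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # toy BPSK-over-AWGN frame (invented parameters, not the actual LDPC/ASM setup)
    n, amp, sigma = 4096, 0.8, 0.6
    bits = rng.choice([-1.0, 1.0], size=n)
    y = amp * bits + sigma * rng.standard_normal(n)

    # normalize the received sequence to unit power, as in the blind method
    y /= np.sqrt(np.mean(y ** 2))

    # blind ML estimating equation: A = mean( y * tanh(A * y / sigma^2) ),
    # with sigma^2 = 1 - A^2 after normalization; solved here by fixed-point iteration
    a_hat = 0.5
    for _ in range(100):
        a_hat = np.mean(y * np.tanh(a_hat * y / (1.0 - a_hat ** 2)))

    combining_ratio = a_hat / (1.0 - a_hat ** 2)   # scale applied to the soft decisions
    print(a_hat, combining_ratio)
    ```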

  17. Distributed density estimation in sensor networks based on variational approximations

    NASA Astrophysics Data System (ADS)

    Safarinejadian, Behrooz; Menhaj, Mohammad B.

    2011-09-01

    This article presents a peer-to-peer (P2P) distributed variational Bayesian (P2PDVB) algorithm for density estimation and clustering in sensor networks. It is assumed that measurements of the nodes can be statistically modelled by a common Gaussian mixture model. The variational approach allows simultaneous estimation of the component parameters and the model complexity. In this algorithm, each node first independently calculates local sufficient statistics by using local observations. A P2P averaging approach is then used to diffuse local sufficient statistics to neighbours and estimate global sufficient statistics in each node. Finally, each sensor node uses the estimated global sufficient statistics to estimate the model order as well as the parameters of this model. Because the P2P averaging approach only requires that each node communicate with its neighbours, the P2PDVB algorithm is scalable and robust. Diffusion speed and convergence of the proposed algorithm are also studied. Finally, simulated and real data sets are used to verify the remarkable performance of the proposed algorithm.
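
    The diffusion step alone is easy to illustrate: each node repeatedly averages its local sufficient statistic with those of its neighbours, and every node converges toward the network-wide average needed for the global update. The ring topology and local statistics below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_nodes = 10
    # each node's local sufficient statistic (e.g. a local sample mean), invented values
    stats = rng.normal(5.0, 2.0, n_nodes)
    target = stats.mean()                 # what consensus should converge to

    # ring topology: node i talks only to nodes i-1 and i+1
    neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

    x = stats.copy()
    for _ in range(200):                  # synchronous peer-to-peer averaging rounds
        x = np.array([(x[i] + sum(x[j] for j in neighbours[i])) / 3.0
                      for i in range(n_nodes)])

    print(target, x)                      # every entry approaches the global average
    ```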

  18. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  19. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain-centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density (in %), which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant-volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density; (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations; (iii) estimation of strain in each distorted point set by the Fry method; and (iv) error estimation by comparing the known strain with that given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain-centre distributions show stronger clustering and a greater degree of anisotropy as the packing density decreases. As compared to the degree of sorting alone, a
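
    The core of the Fry construction is simply the cloud of all pairwise separation vectors between grain centres, whose central vacancy approximates the strain ellipse. A minimal sketch (synthetic centres, a known applied strain, ellipse fitting omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # synthetic grain centres (illustrative; real input is digitized grain centroids)
    centres = rng.uniform(0, 100, size=(300, 2))

    # apply a known homogeneous strain (pure shear, ratio 2:1) to mimic deformation
    strain = np.array([[np.sqrt(2.0), 0.0], [0.0, 1.0 / np.sqrt(2.0)]])
    deformed = centres @ strain.T

    # Fry plot: every pairwise separation vector, translated to a common origin;
    # the elliptical central vacancy of this cloud approximates the strain ellipse
    diffs = deformed[:, None, :] - deformed[None, :, :]
    fry_points = diffs[~np.eye(len(deformed), dtype=bool)].reshape(-1, 2)
    print(fry_points.shape)               # (N*(N-1), 2) points to plot and analyse
    ```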

  20. Dust-cloud density estimation using a single wavelength lidar

    NASA Astrophysics Data System (ADS)

    Youmans, Douglas G.; Garner, Richard C.; Petersen, Kent R.

    1994-09-01

    The passage of commercial and military aircraft through invisible fresh volcanic ash clouds has caused damage to many airplanes. On December 15, 1989 all four engines of a KLM Boeing 747 were temporarily extinguished in a flight over Alaska, resulting in $80 million for repair. Similar aircraft damage to control systems, FLIR/EO windows, wind screens, radomes, aircraft leading edges, and aircraft data systems were reported in Operation Desert Storm during combat flights through high-explosive and naturally occurring desert dusts. The Defense Nuclear Agency is currently developing a compact and rugged lidar under the Aircraft Sensors Program to detect and estimate the mass density of nuclear-explosion produced dust clouds, high-explosive produced dust clouds, and fresh volcanic dust clouds at horizontal distances of up to 40 km from an aircraft. Given this mass density information, the pilot has an option of avoiding or flying through the upcoming cloud.

  1. Estimation of Volumetric Breast Density from Digital Mammograms

    NASA Astrophysics Data System (ADS)

    Alonzo-Proulx, Olivier

    Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, the anti-scatter grid, and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing the association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.

  2. Multivariate mixtures of Erlangs for density estimation under censoring.

    PubMed

    Verbelen, Roel; Antonio, Katrien; Claeskens, Gerda

    2016-07-01

    Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with randomly censored and fixed truncated data. The effectiveness of the proposed algorithm is demonstrated on simulated as well as real data sets. PMID:26340888
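
    A minimal univariate Python sketch of the kind of EM iteration involved, assuming fixed integer shape parameters and a common scale; the paper's procedure is multivariate, adjusts the shape vectors, and handles censoring and truncation, none of which is reproduced here:

        import numpy as np
        from scipy.stats import gamma

        def fit_erlang_mixture(x, shapes, n_iter=200):
            """EM for a univariate Erlang mixture with fixed integer shapes and a common scale."""
            shapes = np.asarray(shapes)
            weights = np.full(len(shapes), 1.0 / len(shapes))
            scale = x.mean() / shapes.mean()
            for _ in range(n_iter):
                # E-step: responsibility of component j for observation i
                dens = np.array([w * gamma.pdf(x, a=r, scale=scale)
                                 for w, r in zip(weights, shapes)])   # shape (k, n)
                resp = dens / dens.sum(axis=0, keepdims=True)
                # M-step: closed-form updates of the mixing weights and the common scale
                weights = resp.mean(axis=1)
                scale = x.sum() / (resp * shapes[:, None]).sum()
            return weights, scale

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.gamma(2, 1.0, 400), rng.gamma(7, 1.0, 600)])
        print(fit_erlang_mixture(x, shapes=[2, 7]))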

  3. Nonparametric estimation of multivariate scale mixtures of uniform densities

    PubMed Central

    Pavlides, Marios G.; Wellner, Jon A.

    2012-01-01

    Suppose that U = (U1, … , Ud) has a Uniform([0, 1]^d) distribution, that Y = (Y1, … , Yd) has the distribution G on R_+^d, and let X = (X1, … , Xd) = (U1Y1, … , UdYd). The resulting class of distributions of X (as G varies over all distributions on R_+^d) is called the Scale Mixture of Uniforms class of distributions, and the corresponding class of densities on R_+^d is denoted by F_SMU(d). We study maximum likelihood estimation in the family F_SMU(d). We prove existence of the MLE, establish Fenchel characterizations, and prove strong consistency of the almost surely unique maximum likelihood estimator (MLE) in F_SMU(d). We also provide an asymptotic minimax lower bound for estimating the functional f ↦ f(x) under reasonable differentiability assumptions on f ∈ F_SMU(d) in a neighborhood of x. We conclude the paper with discussion, conjectures and open problems pertaining to global and local rates of convergence of the MLE. PMID:22485055
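
    For reference, under the construction above the density of X is a scale mixture of uniform densities on coordinate boxes (a standard identity implied by the construction, not quoted from the paper):

        \[
          f(x) \;=\; \int_{\mathbb{R}_+^d} \prod_{i=1}^{d} \frac{\mathbf{1}\{0 < x_i \le y_i\}}{y_i}\, dG(y),
          \qquad x \in \mathbb{R}_+^d .
        \]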

  4. Hierarchical Multiscale Adaptive Variable Fidelity Wavelet-based Turbulence Modeling with Lagrangian Spatially Variable Thresholding

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza

    The current work develops a wavelet-based adaptive variable fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs the notion of spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or other dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the aforementioned concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure consists of tracking the wavelet thresholding-factor within a Lagrangian frame by exploiting a Lagrangian Path-Line Diffusive Averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique represents a framework of continuously variable fidelity wavelet-based space/time/model-form adaptive multiscale methodology. This methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a longtime effort to develop a novel parallel adaptive wavelet collocation method for numerical solution of PDEs has been completed during the course of the current work

  5. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
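
    A minimal Python sketch of the general idea: estimate the background density of matched-filter outputs nonparametrically and read off the decision threshold r_0 that yields a desired P_FA. The LASI algorithms themselves are not reproduced; the kernel density estimate and all names below are illustrative:

        import numpy as np
        from scipy.stats import gaussian_kde

        def cfar_threshold(background_scores, pfa=1e-3, grid_size=4096):
            """Decision threshold r_0 giving the requested false-alarm probability
            under a KDE estimate of the background matched-filter output density."""
            kde = gaussian_kde(background_scores)
            lo = background_scores.min()
            hi = background_scores.max() + 5.0 * background_scores.std()
            grid = np.linspace(lo, hi, grid_size)
            cdf = np.cumsum(kde(grid))
            cdf /= cdf[-1]
            return grid[np.searchsorted(cdf, 1.0 - pfa)]

        rng = np.random.default_rng(3)
        scores = rng.standard_normal(20000)           # stand-in for background filter outputs
        print(cfar_threshold(scores, pfa=1e-3))       # close to the Gaussian 99.9th percentile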

  6. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  7. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGESBeta

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
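
    A small numpy sketch of how ordinary and multiple coherence follow from a cross-spectral density matrix at a single frequency line. The paper's Cholesky/SVD formulation, partial coherences, and the proposed fractional coherence are not reproduced; the matrix below is a toy example:

        import numpy as np

        def ordinary_coherence(G, i, j):
            """Ordinary coherence between records i and j from a cross-spectral density matrix G."""
            return np.abs(G[i, j]) ** 2 / (G[i, i].real * G[j, j].real)

        def multiple_coherence(G, inputs, output):
            """Multiple coherence of one output with a set of inputs, from the same CSD matrix."""
            Gxx = G[np.ix_(inputs, inputs)]
            Gxy = G[np.ix_(inputs, [output])]
            Gyy = G[output, output].real
            return ((Gxy.conj().T @ np.linalg.solve(Gxx, Gxy)).real / Gyy).item()

        # Toy CSD matrix at one frequency line: two inputs (indices 0, 1), one output (index 2).
        G = np.array([[2.0, 0.3 + 0.1j, 0.8 - 0.2j],
                      [0.3 - 0.1j, 1.5, 0.5 + 0.4j],
                      [0.8 + 0.2j, 0.5 - 0.4j, 1.2]])
        print(ordinary_coherence(G, 0, 2), multiple_coherence(G, [0, 1], 2))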

  8. Estimating tropical-forest density profiles from multibaseline interferometric SAR

    NASA Technical Reports Server (NTRS)

    Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio

    2006-01-01

    Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component to global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means to producing profiles.

  9. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r2 = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.

  10. Cortical cell and neuron density estimates in one chimpanzee hemisphere.

    PubMed

    Collins, Christine E; Turner, Emily C; Sawyer, Eva Kille; Reed, Jamie L; Young, Nicole A; Flaherty, David K; Kaas, Jon H

    2016-01-19

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm(2) of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  11. Cortical cell and neuron density estimates in one chimpanzee hemisphere

    PubMed Central

    Collins, Christine E.; Turner, Emily C.; Sawyer, Eva Kille; Reed, Jamie L.; Young, Nicole A.; Flaherty, David K.; Kaas, Jon H.

    2016-01-01

    The density of cells and neurons in the neocortex of many mammals varies across cortical areas and regions. This variability is, perhaps, most pronounced in primates. Nonuniformity in the composition of cortex suggests regions of the cortex have different specializations. Specifically, regions with densely packed neurons contain smaller neurons that are activated by relatively few inputs, thereby preserving information, whereas regions that are less densely packed have larger neurons that have more integrative functions. Here we present the numbers of cells and neurons for 742 discrete locations across the neocortex in a chimpanzee. Using isotropic fractionation and flow fractionation methods for cell and neuron counts, we estimate that neocortex of one hemisphere contains 9.5 billion cells and 3.7 billion neurons. Primary visual cortex occupies 35 cm2 of surface, 10% of the total, and contains 737 million densely packed neurons, 20% of the total neurons contained within the hemisphere. Other areas of high neuron packing include secondary visual areas, somatosensory cortex, and prefrontal granular cortex. Areas of low levels of neuron packing density include motor and premotor cortex. These values reflect those obtained from more limited samples of cortex in humans and other primates. PMID:26729880

  12. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed; in seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA is detected.
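
    A minimal Python sketch of generic wavelet-threshold denoising as a pre-processing step. It uses a plain discrete wavelet transform with a fixed universal threshold rather than the specific methods evaluated in the paper, and all names and parameter choices are illustrative:

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=5):
            """Soft-threshold detail coefficients with a universal threshold per level."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            denoised = [coeffs[0]]
            for d in coeffs[1:]:
                sigma = np.median(np.abs(d)) / 0.6745          # robust noise estimate
                thr = sigma * np.sqrt(2 * np.log(len(signal)))
                denoised.append(pywt.threshold(d, thr, mode="soft"))
            return pywt.waverec(denoised, wavelet)[: len(signal)]

        fs = 500.0
        t = np.arange(0, 4, 1 / fs)
        noisy = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.random.default_rng(4).standard_normal(t.size)
        clean = wavelet_denoise(noisy)                          # stand-in for an ECG pre-processing step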

  13. Joint wavelet-based coding and packetization for video transport over packet-switched networks

    NASA Astrophysics Data System (ADS)

    Lee, Hung-ju

    1996-02-01

    In recent years, wavelet theory applied to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying networking systems is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach which attempts to preserve the advantages of wavelet-based image coding while providing a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, called the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, are presented in conjunction with wavelet-based coding. Our approach is evaluated under two different packet loss models with various packet loss probabilities through simulations driven by real video sequences.

  14. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGESBeta

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; Gur, Sourav; Danielson, Thomas L.; Hin, Celine N.; Pannala, Sreekanth; Frantziskonis, George N.

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.

  15. Hierarchical wavelet-based image model for pattern analysis and synthesis

    NASA Astrophysics Data System (ADS)

    Scott, Clayton D.; Nowak, Robert D.

    2000-12-01

    Despite their success in other areas of statistical signal processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown pattern transformations. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR, a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length complexity regularization to learn a template with a sparse representation in the wavelet domain. We illustrate template learning with examples, and discuss how TEMPLAR applies to pattern classification and denoising from multiple, unaligned observations.

  16. Estimating Foreign-Object-Debris Density from Photogrammetry Data

    NASA Technical Reports Server (NTRS)

    Long, Jason; Metzger, Philip; Lane, John

    2013-01-01

    Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was if the debris was a fire brick, and if it represented the first bricks that were ejected from the flame trench wall, or was the object one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch, instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method that suppresses trajectory computational instabilities due to noisy position data was obtained. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data is needed as an input.

  17. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

    This paper mainly studies the wavelet-based ECW remote sensing image compression technology. Comparing it with the traditional compression technology JPEG and the newer wavelet-based technology JPEG2000, we find that the ER Mapper Compressed Wavelet (ECW) format has significant advantages when compressing very large remote sensing images. How to use the ECW SDK is also discussed, and it is shown to be the best and fastest way to compress China-Brazil Earth Resource Satellite (CBERS) images.

  18. Recognition of short-term changes in physiological signals with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, Alexey N.; Sindeeva, Olga A.; Sindeev, Sergey S.; Pavlova, Olga N.; Rybalova, Elena V.; Semyachkina-Glushkovskaya, Oxana V.

    2016-03-01

    In this paper we address the problem of revealing and recognizing transitions between distinct physiological states using quite short fragments of experimental recordings. With wavelet-based multifractal analysis we characterize changes in the complexity and correlation properties of the stress-induced dynamics of arterial blood pressure in rats. We propose an approach for associating the revealed changes with distinct physiological regulatory mechanisms and for quantifying the influence of each mechanism.

  19. Wavelet-Based Real-Time Diagnosis of Complex Systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Mackey, Ryan

    2003-01-01

    A new method of robust, autonomous real-time diagnosis of a time-varying complex system (e.g., a spacecraft, an advanced aircraft, or a process-control system) is presented here. It is based upon the characterization and comparison of (1) the execution of software, as reported by discrete data, and (2) data from sensors that monitor the physical state of the system, such as performance sensors or similar quantitative time-varying measurements. By taking account of the relationship between execution of, and the responses to, software commands, this method satisfies a key requirement for robust autonomous diagnosis, namely, ensuring that control is maintained and followed. Such monitoring of control software requires that estimates of the state of the system, as represented within the control software itself, are representative of the physical behavior of the system. In this method, data from sensors and discrete command data are analyzed simultaneously and compared to determine their correlation. If the sensed physical state of the system differs from the software estimate (see figure) or if the system fails to perform a transition as commanded by software, or such a transition occurs without the associated command, the system has experienced a control fault. This method provides a means of detecting such divergent behavior and automatically generating an appropriate warning.

  20. Usefulness of wavelet-based features as global descriptors of VHR satellite images

    NASA Astrophysics Data System (ADS)

    Pyka, Krystian; Drzewiecki, Wojciech; Bernat, Katarzyna; Wawrzaszek, Anna; Krupiński, Michal

    2014-10-01

    In this paper we present the results of research carried out to assess the usefulness of wavelet-based measures of image texture for classification of panchromatic VHR satellite image content. The study is based on images obtained from the EROS-A satellite. Wavelet-based features are calculated according to two approaches. In the first, the wavelet energy is calculated for each component at every level of decomposition using the Haar wavelet. In the second, the variance and kurtosis are calculated as mean values of the detail components, with filters belonging to the D, LA and MB groups of various lengths. The results indicate that both approaches are useful and complement one another. The most useful wavelet-based features include not only those calculated with short or long filters, but also those obtained with filters of intermediate length. The use of filters of different types and lengths, as well as of different statistical parameters (variance, kurtosis) calculated as means for each decomposition level, improved the discriminative properties of the feature vector, which initially consisted of the wavelet energies of each component.
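
    A minimal Python sketch of the first approach: per-level, per-component wavelet energies used as texture features. The variance and kurtosis features of the second approach and the D, LA, MB filter families are not covered, and all names are illustrative:

        import numpy as np
        import pywt

        def wavelet_energy_features(image, wavelet="haar", level=3):
            """Relative energy of each detail component at each decomposition level."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            feats = []
            for cH, cV, cD in coeffs[1:]:                      # skip the approximation component
                for band in (cH, cV, cD):
                    feats.append(np.sum(band.astype(float) ** 2))
            feats = np.array(feats)
            return feats / feats.sum()                         # normalise to the total detail energy

        img = np.random.default_rng(5).integers(0, 255, size=(256, 256)).astype(float)
        print(wavelet_energy_features(img))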

  1. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  2. Application of wavelet-based multiple linear regression model to rainfall forecasting in Australia

    NASA Astrophysics Data System (ADS)

    He, X.; Guan, H.; Zhang, X.; Simmons, C.

    2013-12-01

    In this study, a wavelet-based multiple linear regression model is applied to forecast monthly rainfall in Australia by using monthly historical rainfall data and climate indices as inputs. The wavelet-based model is constructed by incorporating the multi-resolution analysis (MRA) with the discrete wavelet transform and multiple linear regression (MLR) model. The standardized monthly rainfall anomaly and large-scale climate index time series are decomposed using MRA into a certain number of component subseries at different temporal scales. The hierarchical lag relationship between the rainfall anomaly and each potential predictor is identified by cross correlation analysis with a lag time of at least one month at different temporal scales. The components of predictor variables with known lag times are then screened with a stepwise linear regression algorithm to be selectively included into the final forecast model. The MRA-based rainfall forecasting method is examined with 255 stations over Australia, and compared to the traditional multiple linear regression model based on the original time series. The models are trained with data from the 1959-1995 period and then tested in the 1996-2008 period for each station. The performance is compared with observed rainfall values, and evaluated by common statistics of relative absolute error and correlation coefficient. The results show that the wavelet-based regression model provides considerably more accurate monthly rainfall forecasts for all of the selected stations over Australia than the traditional regression model.
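
    A minimal Python sketch of the main ingredients: decompose a predictor into additive MRA components, lag them, and fit an ordinary least-squares MLR. The cross-correlation lag identification, stepwise screening, and station-by-station evaluation are not reproduced; the data and names below are synthetic and illustrative:

        import numpy as np
        import pywt

        def mra_components(series, wavelet="db4", level=3):
            """Additive multiresolution components of a 1-D series (details plus smooth)."""
            coeffs = pywt.wavedec(series, wavelet, level=level)
            parts = []
            for k in range(len(coeffs)):
                keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
                parts.append(pywt.waverec(keep, wavelet)[: len(series)])
            return np.column_stack(parts)        # columns sum (approximately) to the series

        rng = np.random.default_rng(6)
        n = 480                                  # e.g. 40 years of monthly anomalies
        climate_index = rng.standard_normal(n)
        rainfall = 0.6 * np.roll(climate_index, 1) + 0.3 * rng.standard_normal(n)

        # Lag the predictor components by one month and fit an ordinary MLR by least squares.
        X = mra_components(climate_index)[:-1]
        y = rainfall[1:]
        beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
        print(beta)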

  3. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role for successful implementation of the wavelet based rainfall-runoff artificial neural network models as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of the hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all the 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The result also shows that the pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.

  4. Numerical Modeling of Global Atmospheric Chemical Transport with Wavelet-based Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2012-12-01

    In this work we present a multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of global atmospheric chemical transport problems. An accurate numerical simulation of such problems presents an enormous challenge. Atmospheric Chemical Transport Models (CTMs) combine chemical reactions with meteorologically predicted atmospheric advection and turbulent mixing. The resulting system of multi-scale advection-reaction-diffusion equations is extremely stiff, nonlinear and involves a large number of chemically interacting species. As a consequence, the need for enormous computational resources for solving these equations imposes severe limitations on the spatial resolution of the CTMs implemented on uniform or quasi-uniform grids. In turn, this relatively crude spatial resolution results in significant numerical diffusion introduced into the system. This numerical diffusion is shown to noticeably distort the pollutant mixing and transport dynamics for typically used grid resolutions. The WAMR method for numerical modeling of atmospheric chemical evolution equations developed in this work provides a significant reduction in computational cost without compromising numerical accuracy, and therefore addresses the numerical difficulties described above. The WAMR method introduces a fine grid in regions where sharp transitions occur and a coarser grid in regions of smooth solution behavior. Therefore WAMR results in much more accurate solutions than conventional numerical methods implemented on uniform or quasi-uniform grids. The algorithm allows one to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. The method has been tested for a variety of problems including numerical simulation of traveling pollution plumes. It was shown that pollution plumes in the remote troposphere can propagate as well-defined layered structures for two weeks or more as

  5. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for the transmission of 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate

  6. Atmospheric turbulence mitigation using complex wavelet-based fusion.

    PubMed

    Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R

    2013-06-01

    Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359

  7. Wavelets based algorithm for the evaluation of enhanced liver areas

    NASA Astrophysics Data System (ADS)

    Alvarez, Matheus; Rodrigues de Pina, Diana; Giacomini, Guilherme; Gomes Romeiro, Fernando; Barbosa Duarte, Sérgio; Yamashita, Seizo; de Arruda Miranda, José Ricardo

    2014-03-01

    Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, the tumor evaluation is based on the mRECIST criteria, which involve the measurement of the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor from the contrast-enhanced area of the lesions. 63 computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Optimization of the algorithm's detection and quantification was performed using the virtual phantom. After that, we compared the algorithm's findings of the maximum diameter of the target lesions against radiologist measures. Computed results for the maximum diameter are in good agreement with the results obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small-sized tumors. Differences between algorithm and radiologist measures were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.

  8. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spike) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy. PMID:22410322

  9. Wavelet-based coherence measures of global seismic noise properties

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.

    2015-04-01

    The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include generalized Hurst exponent, multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on the data from 229 broadband stations of GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, which is based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property demonstrate a substantial increase starting in 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of noise correlation increasing before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest (magnitude not less than 8.5) earthquakes, observed since the Sumatra mega-earthquake of 26 Dec 2004. This synchronization continues to grow up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes all over the world.

  10. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, one is highly likely to observe artificial characteristics in the estimated pdf that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific functional form on the pdf. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
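
    A minimal Python sketch of the basic step: a Gaussian-kernel KDE of TEC values together with the mean, variance and kurtosis mentioned above. Bandwidth selection and real GNSS-derived TEC inputs are not addressed; the data below are synthetic:

        import numpy as np
        from scipy.stats import gaussian_kde, kurtosis

        def tec_pdf_and_moments(tec_values, grid_size=512):
            """Nonparametric pdf of TEC estimates plus simple summary statistics."""
            kde = gaussian_kde(tec_values)                     # Gaussian kernel, rule-of-thumb bandwidth
            grid = np.linspace(tec_values.min(), tec_values.max(), grid_size)
            return grid, kde(grid), tec_values.mean(), tec_values.var(), kurtosis(tec_values)

        rng = np.random.default_rng(7)
        tec = np.abs(rng.normal(20.0, 5.0, size=5000))         # stand-in for TEC values in TECU
        grid, pdf, mean, var, kurt = tec_pdf_and_moments(tec)
        print(mean, var, kurt)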

  11. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
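
    A minimal numpy sketch of the L1-median computed with Weiszfeld's iteration, one common way to obtain the robust location estimate on which the estimator is built. The full PDF estimator and the robust spread measure are not shown, and all names are illustrative:

        import numpy as np

        def l1_median(X, n_iter=200, eps=1e-9):
            """Spatial (L1) median of the rows of X via Weiszfeld's iteration."""
            m = X.mean(axis=0)                                 # start from the sample mean
            for _ in range(n_iter):
                d = np.linalg.norm(X - m, axis=1)
                d = np.maximum(d, eps)                         # avoid division by zero at data points
                w = 1.0 / d
                m_new = (w[:, None] * X).sum(axis=0) / w.sum()
                if np.linalg.norm(m_new - m) < eps:
                    break
                m = m_new
            return m

        rng = np.random.default_rng(8)
        X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(25, 1, (10, 2))])   # a few gross outliers
        print(X.mean(axis=0), l1_median(X))    # the L1-median is pulled far less by the outliers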

  12. A generalized model for estimating the energy density of invertebrates

    USGS Publications Warehouse

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2  =  0.96, p < 0.0001), where ED (as J/g wet mass) was estimated from pDM as ED  =  22,960pDM − 174.2. Model evaluation showed that nearly all (98.8%) of the variability between observed and predicted values for invertebrate ED could be attributed to residual error in the model. Regression of observed on predicted values revealed that the 97.5% joint confidence region included the intercept of 0 (−103.0 ± 707.9) and slope of 1 (1.01 ± 0.12). Use of this model requires that only dry and wet mass measurements be obtained, resulting in significant time, sample size, and cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
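
    A short worked example of the fitted model above, assuming a hypothetical invertebrate sample with wet mass 1.0 g and dry mass 0.20 g (so pDM = 0.20):

        p_dm = 0.20                                # proportion of dry-to-wet mass (pDM)
        energy_density = 22960 * p_dm - 174.2      # J/g wet mass, from the fitted model above
        print(energy_density)                      # about 4417.8 J/g wet mass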

  13. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.

  14. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
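
    A minimal Python sketch of a FOURIER-series estimator of the perpendicular-distance density and the resulting line-transect density estimate. The choice of the number of terms and the variance estimation are not addressed, and the sighting distances below are synthetic:

        import numpy as np

        def fourier_density(distances, w, m=3):
            """Fourier-series estimate of the perpendicular-distance pdf on [0, w]."""
            n = len(distances)
            a = [(2.0 / (n * w)) * np.sum(np.cos(k * np.pi * distances / w)) for k in range(1, m + 1)]

            def f(x):
                x = np.asarray(x, dtype=float)
                return 1.0 / w + sum(ak * np.cos(k * np.pi * x / w) for k, ak in enumerate(a, start=1))

            return f

        rng = np.random.default_rng(9)
        dists = np.abs(rng.normal(0.0, 20.0, size=400))        # synthetic right-angle distances, metres
        w = dists.max()                                        # truncation distance
        f = fourier_density(dists, w, m=3)
        L = 10000.0                                            # total transect length, metres
        print(len(dists) * f(0.0) / (2 * L))                   # density estimate: objects per square metre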

  15. Etalon-photometric method for estimation of tissues density at x-ray images

    NASA Astrophysics Data System (ADS)

    Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.

    2016-04-01

    The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon during registration and in estimating the photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is presented.

  16. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  17. Automatic quantitative analysis of ultrasound tongue contours via wavelet-based functional mixed models.

    PubMed

    Lancia, Leonardo; Rausch, Philip; Morris, Jeffrey S

    2015-02-01

    This paper illustrates the application of wavelet-based functional mixed models to automatic quantification of differences between tongue contours obtained through ultrasound imaging. The reliability of this method is demonstrated through the analysis of tongue positions recorded from a female and a male speaker at the onset of the vowels /a/ and /i/ produced in the context of the consonants /t/ and /k/. The proposed method allows detection of significant differences between configurations of the articulators that are visible in ultrasound images during the production of different speech gestures and is compatible with statistical designs containing both fixed and random terms. PMID:25698047

  18. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  19. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  20. Information passage from acoustic impedance to seismogram: Perspectives from wavelet-based multiscale analysis

    NASA Astrophysics Data System (ADS)

    Li, Chun-Feng

    2004-07-01

    Traditional seismic interpretation of surface seismic data is focused primarily on seismic oscillation. Rich singularity information carried by, but deeply buried in, seismic data is often ignored. We show that wavelet-based singularity analysis reveals generic singularity information conducted from acoustic impedance to seismogram. The singularity exponents (known as Hölder exponents α) calculated from seismic data are independent of amplitude and robust to phase changes and noise. These unique properties of α offer potentially important applications in many fields, especially in seismic data interpretation, processing, inversion, and the study of wave attenuation.

  1. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  2. An economic prediction of refinement coefficients in wavelet-based adaptive methods for electron structure calculations.

    PubMed

    Pipek, János; Nagy, Szilvia

    2013-03-01

    The wave function of a many-electron system contains inhomogeneously distributed spatial details, which makes it possible to reduce the number of fine-detail wavelets in multiresolution analysis approximations. Finding a method for decimating the unnecessary basis functions plays an essential role in avoiding an exponential increase of computational demand in wavelet-based calculations. We describe an effective prediction algorithm for the wavelet coefficients of the next resolution level, based on the approximate wave function expanded up to a given level. The prediction results in a reasonable approximation of the wave function and allows the unnecessary wavelets to be sorted out with great reliability. PMID:23115109

  3. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    PubMed

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833

  4. A maximum entropy kernel density estimator with applications to function interpolation and texture segmentation

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Nikhil; Schonfeld, Dan

    2006-02-01

    In this paper, we develop a new algorithm to estimate an unknown probability density function given a finite data sample using a tree-shaped kernel density estimator. The algorithm formulates an integrated squared error based cost function which minimizes the quadratic divergence between the kernel density and the Parzen density estimate. The cost function reduces to a quadratic programming problem which is minimized within the maximum entropy framework. The maximum entropy principle acts as a regularizer which yields a smooth solution. A smooth density estimate enables better generalization to unseen data and offers distinct advantages in high dimensions and cases where there is limited data. We demonstrate applications of the hierarchical kernel density estimator for function interpolation and texture segmentation problems. When applied to function interpolation, the kernel density estimator improves performance considerably in situations where the posterior conditional density of the dependent variable is multimodal. The kernel density estimator allows flexible nonparametric modeling of textures, which improves performance in texture segmentation algorithms. We also demonstrate the algorithm's behavior in high dimensions on a text labeling problem. The hierarchical nature of the density estimator enables multiresolution solutions depending on the complexity of the data. The algorithm is fast and has at most quadratic scaling in the number of kernels.

  5. An adaptive wavelet-based deblocking algorithm for MPEG-4 codec

    NASA Astrophysics Data System (ADS)

    Truong, Trieu-Kien; Chen, Shi-Huang; Jhang, Rong-Yi

    2005-08-01

    This paper proposes an adaptive wavelet-based deblocking algorithm for the MPEG-4 video coding standard. The novelty of this method is that the deblocking filter uses a wavelet-based threshold to detect and analyze artifacts on coded block boundaries. This threshold value is based on the difference between the wavelet transform coefficients of image blocks and the coefficients of the entire image. Therefore, the threshold value is made adaptive to different images and characteristics of blocking artifacts. One can then attenuate those artifacts by applying a selected filter based on the above threshold value. It is shown in this paper that the proposed method is robust, fast, and works remarkably well for the MPEG-4 codec at low bit rates. Another advantage of the new method is that it retains sharp features in the decoded frames since it only removes artifacts. Experimental results show that the proposed method can achieve significantly improved visual quality and increase the PSNR of the decoded video frames.

  6. Wavelet-based neural network analysis of internal carotid arterial Doppler signals.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2006-06-01

    In this study, internal carotid arterial Doppler signals recorded from 130 subjects, 45 of whom suffered from internal carotid artery stenosis, 44 from internal carotid artery occlusion, and the rest of whom were healthy, were classified using a wavelet-based neural network. The wavelet-based neural network model, employing the multilayer perceptron, was used for analysis of the internal carotid arterial Doppler signals. A multilayer perceptron neural network (MLPNN) trained with the Levenberg-Marquardt algorithm was used to detect stenosis and occlusion in internal carotid arteries. In order to determine the MLPNN inputs, spectral analysis of the internal carotid arterial Doppler signals was performed using the wavelet transform (WT). The MLPNN was trained, cross validated, and tested with training, cross validation, and testing sets, respectively. All these data sets were obtained from internal carotid arteries of healthy subjects and of subjects suffering from internal carotid artery stenosis and occlusion. The correct classification rate was 96% for healthy subjects, 96.15% for subjects having internal carotid artery stenosis, and 96.30% for subjects having internal carotid artery occlusion. The classification results showed that the MLPNN trained with the Levenberg-Marquardt algorithm was effective in detecting internal carotid artery stenosis and occlusion. PMID:16848135

  7. A wavelet-based data pre-processing analysis approach in mass spectrometry.

    PubMed

    Li, Xiaoli; Li, Jin; Yao, Xin

    2007-04-01

    Recently, mass spectrometry analysis has become an effective and rapid approach for detecting early-stage cancer. To identify proteomic patterns in serum that discriminate cancer patients from normal individuals, machine-learning methods such as feature selection and classification have already been applied to the analysis of mass spectrometry (MS) data with some success. However, the performance of existing machine learning methods for MS data analysis still needs improvement. This paper proposes a wavelet-based pre-processing approach to MS data analysis. The approach applies wavelet-based transforms to MS data with the aim of de-noising data that are potentially contaminated during acquisition. The effects of the selection of wavelet function and decomposition level on the de-noising performance have also been investigated in this study. Our comparative experimental results demonstrate that the proposed de-noising pre-processing approach has the potential to remove noise embedded in MS data, which can lead to improved performance of existing machine learning methods in cancer detection. PMID:16982045
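
    A minimal sketch of generic wavelet shrinkage de-noising of a one-dimensional spectrum, in the spirit of the pre-processing step described above (using PyWavelets). The wavelet, decomposition level and universal threshold are common illustrative defaults, not the settings selected in the paper.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      mz = np.linspace(0, 1, 4096)
      clean = np.exp(-((mz - 0.3) / 0.004) ** 2) + 0.6 * np.exp(-((mz - 0.7) / 0.006) ** 2)
      noisy = clean + 0.05 * rng.standard_normal(mz.size)

      wavelet, level = "sym8", 5
      coeffs = pywt.wavedec(noisy, wavelet, level=level)

      # Universal threshold with a robust noise estimate from the finest detail level
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745
      thr = sigma * np.sqrt(2 * np.log(noisy.size))
      coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

      denoised = pywt.waverec(coeffs, wavelet)[: noisy.size]
      print("residual RMS after de-noising:", np.sqrt(np.mean((denoised - clean) ** 2)))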

  8. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    PubMed Central

    G. S., Vijay; H. S., Kumar; Pai P., Srinivasa; N. S., Sriram; Rao, Raj B. K. N.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal. PMID:23213323

  9. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    PubMed Central

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rates may underestimate risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are common in practice, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a serious defect: they cannot fit bimodal or multimodal distributions such as the recovery rates of corporate loans and bonds that Moody's new data show. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results obtained by histogram, Beta distribution estimation and kernel density estimation, reaching the conclusion that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate shows that it can fit the curve of recovery rates of loans and bonds. Using kernel density estimation to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management. PMID:23874558
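
    The contrast drawn above between a single Beta fit and a Gaussian kernel density estimate can be reproduced on synthetic bimodal data with SciPy; the sketch below is illustrative only (the mixture is made up and is not Moody's recovery-rate data).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      # Synthetic bimodal "recovery rates" on (0, 1)
      sample = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

      # Single Beta fit with the support fixed to [0, 1]; a Beta density can have
      # at most one interior mode, so it cannot follow both clusters
      a, b, loc, scale = stats.beta.fit(sample, floc=0, fscale=1)

      # Gaussian kernel density estimate (can follow both modes)
      kde = stats.gaussian_kde(sample)

      grid = np.linspace(0.01, 0.99, 199)
      print("Beta fit parameters a, b:", round(a, 2), round(b, 2))
      print("interior local maxima in the KDE curve:",
            int(np.sum(np.diff(np.sign(np.diff(kde(grid)))) < 0)))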

  11. EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS

    PubMed Central

    Mukherjee, Gourab; Johnstone, Iain M.

    2015-01-01

    We consider estimating the predictive density under Kullback–Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates. PMID:26448678

  12. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional

  13. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Arvind; Dorai, Kavita

    2016-09-01

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation.
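
    As a hedged illustration of the positivity constraint discussed above, the sketch below takes a naively reconstructed single-qubit matrix with a negative eigenvalue and maps it to the closest positive semidefinite, unit-trace state by eigenvalue truncation. This shows the constraint that a maximum-likelihood reconstruction enforces; it is not the authors' NMR protocol.

      import numpy as np

      # Hermitian, unit trace, but with one negative eigenvalue (unphysical)
      rho_naive = np.array([[0.65, 0.55],
                            [0.55, 0.35]])

      evals, evecs = np.linalg.eigh(rho_naive)
      evals = np.clip(evals, 0.0, None)     # discard unphysical negative weight
      evals /= evals.sum()                  # renormalise to unit trace
      rho_phys = evecs @ np.diag(evals) @ evecs.conj().T

      print("naive eigenvalues:   ", np.round(np.linalg.eigvalsh(rho_naive), 3))
      print("physical eigenvalues:", np.round(np.linalg.eigvalsh(rho_phys), 3))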

  14. Estimated global nitrogen deposition using NO2 column density

    USGS Publications Warehouse

    Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao

    2013-01-01

    Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. According to the general nitrogen deposition calculation method, we use the principal component regression method to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. From the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition according to previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹ and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposits is also increasing. With N emissions continuing to increase in the future, areas whose ecosystem is affected by high level nitrogen deposition will increase.
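
    A minimal sketch of principal component regression, the regression family named above, using scikit-learn on synthetic predictors standing in for the NO2 column density and meteorological fields; the data, dimensions and number of retained components are all assumptions made for illustration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      X = rng.normal(size=(500, 6))              # e.g. NO2 column, temperature, wind, ...
      beta = np.array([1.2, 0.0, 0.5, 0.0, -0.3, 0.0])
      y = X @ beta + 0.2 * rng.normal(size=500)  # proxy for nitrogen deposition

      # Principal component regression: standardise, project onto leading PCs, regress
      pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
      pcr.fit(X, y)
      print("PCR R^2 on the training data:", round(pcr.score(X, y), 3))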

  15. Density estimation and population growth of mosquitofish (Gambusia affinis) in rice fields.

    PubMed

    Stewart, R J; Miura, T

    1985-03-01

    Mark-release-recapture estimation of population density was performed with mosquitofish in a rice field habitat. Regression analysis showed a relationship between absolute density and mean number of fish per trap. Trap counts were converted to density estimates with data from several fields and growth curves were calculated to describe seasonal growth of mosquitofish populations at three different initial stocking rates. The calculated curves showed a good correspondence to field populations of mosquitofish. PMID:2906659

  16. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation
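
    A minimal sketch, under assumed inputs, of the comparison described above between Monte Carlo and Latin Hypercube Sampling: the spread of the estimated mean of a simple two-variable response is computed over repeated runs with SciPy's qmc module. The response function and sample sizes are illustrative, not the SAE test cases used in the report.

      import numpy as np
      from scipy.stats import norm, qmc

      def response(u):
          # Map uniform samples to standard normal inputs, then evaluate the response
          x = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
          return x[:, 0] ** 2 + 0.5 * x[:, 1]

      n, reps = 128, 200
      rng = np.random.default_rng(0)
      mc_means, lhs_means = [], []
      for rep in range(reps):
          mc_means.append(response(rng.random((n, 2))).mean())
          lhs_means.append(response(qmc.LatinHypercube(d=2, seed=rep).random(n)).mean())

      print("MC  std of the mean estimate:", np.std(mc_means))
      print("LHS std of the mean estimate:", np.std(lhs_means))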

  18. Wavelet-based enhancement for detection of left ventricular myocardial boundaries in magnetic resonance images.

    PubMed

    Fu, J C; Chai, J W; Wong, S T

    2000-11-01

    MRI is noninvasive and generates clear images, giving it great potential as a diagnostic instrument. However, current methods of image analysis are too time-consuming for dynamic systems such as the cardiovascular system. Since dynamic imaging generates a huge number of images, a computer-aided machine vision diagnostic tool is essential for implementing MRI-based measurement. In this paper, a wavelet-based image technique is applied to enhance left ventricular endocardial and epicardial profiles as the preprocessor for a dynamic programming-based automatic border detection algorithm. Statistical tests are conducted to verify the performance of the enhancement technique by comparing manually drawn borders with (1) borders generated from the enhanced images and (2) borders generated from the original images. PMID:11118768

  19. FAST TRACK COMMUNICATION: From cardinal spline wavelet bases to highly coherent dictionaries

    NASA Astrophysics Data System (ADS)

    Andrle, Miroslav; Rebollo-Neira, Laura

    2008-05-01

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation.
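
    The 'coherence' mentioned above is commonly quantified as the mutual coherence of a dictionary, i.e. the largest absolute inner product between distinct normalised atoms. A minimal sketch with a random stand-in dictionary (not the cardinal spline construction of the paper) is given below.

      import numpy as np

      rng = np.random.default_rng(0)
      D = rng.normal(size=(64, 256))                   # columns are dictionary atoms
      D /= np.linalg.norm(D, axis=0, keepdims=True)    # normalise every atom

      gram = np.abs(D.T @ D)                           # absolute pairwise correlations
      np.fill_diagonal(gram, 0.0)
      print("mutual coherence of the dictionary:", round(float(gram.max()), 3))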

  20. Conjugate Event Study of Geomagnetic ULF Pulsations with Wavelet-based Indices

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Clauer, C. R.; Kim, H.; Weimer, D. R.; Cai, X.

    2013-12-01

    The interactions between the solar wind and the geomagnetic field produce a variety of space weather phenomena, which can impact the advanced technology systems of modern society including, for example, power systems, communication systems, and navigation systems. One such phenomenon is the geomagnetic ULF pulsation observed by ground-based or in-situ satellite measurements. Here, we describe a wavelet-based index and apply it to study geomagnetic ULF pulsations observed in the Antarctica and Greenland magnetometer arrays. The wavelet indices computed from these data provide spectral, correlation, and magnitude information regarding the geomagnetic pulsations. The results show that the geomagnetic field at conjugate locations responds differently according to the frequency of the pulsations. The index is effective for identifying pulsation events and measures important characteristics of the pulsations. It could be a useful tool for monitoring geomagnetic pulsations.

  1. Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis

    PubMed Central

    Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun

    2014-01-01

    To realize efficient data capture and accurate analysis of pitting corrosion in reinforced concrete (RC) structures, we first design and implement a wireless sensor network (WSN) to monitor the pitting corrosion of RC panels, and then propose a wavelet-based algorithm to analyze the corrosion state from the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of the corrosion process with a configured period and send these signals to a central computer for analysis. The proposed algorithm, based on wavelet domain analysis, returns the energy distribution of the electrochemical emission data, from which closer observation and understanding can be achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673

  2. Wavelet-based adaptive numerical simulation of unsteady 3D flow around a bluff body

    NASA Astrophysics Data System (ADS)

    de Stefano, Giuliano; Vasilyev, Oleg

    2012-11-01

    The unsteady three-dimensional flow past a two-dimensional bluff body is numerically simulated using a wavelet-based method. The body is modeled by exploiting the Brinkman volume-penalization method, which results in modifying the governing equations with the addition of an appropriate forcing term inside the spatial region occupied by the obstacle. The volume-penalized incompressible Navier-Stokes equations are numerically solved by means of the adaptive wavelet collocation method, where the non-uniform spatial grid is dynamically adapted to the flow evolution. The combined approach is successfully applied to the simulation of vortex shedding flow behind a stationary prism with square cross-section. The computation is conducted at transitional Reynolds numbers, where fundamental unstable three-dimensional vortical structures exist, by well-predicting the unsteady forces arising from fluid-structure interaction.

  3. Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  4. A linear quality control design for high efficient wavelet-based ECG data compression.

    PubMed

    Hung, King-Chu; Tsai, Chin-Feng; Ku, Cheng-Tung; Wang, Huan-Sheng

    2009-05-01

    In ECG data compression, maintaining reconstructed signal with desired quality is crucial for clinical application. In this paper, a linear quality control design based on the reversible round-off non-recursive discrete periodized wavelet transform (RRO-NRDPWT) is proposed for high efficient ECG data compression. With the advantages of error propagation resistance and octave coefficient normalization, RRO-NRDPWT enables the non-linear quantization control to obtain an approximately linear distortion by using a single control variable. Based on the linear programming, a linear quantization scale prediction model is presented for the quality control of reconstructed ECG signal. Following the use of the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, can obtain much better quality control performance than that of other wavelet-based systems. PMID:19070935

  5. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  6. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach in contrast enhancement. Generalized masking formulation has static scale value selection, which limits the gain of contrast. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance matrices, and our OWBM shows improved results as compared with other reported literature. PMID:26945462

  7. RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD

    SciTech Connect

    Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu

    2014-06-10

    We present our astrometric observations of the small near-Earth object 2011 MD (H ∼ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.

  8. Wavelet-based functional linear mixed models: an application to measurement error–corrected distributed lag models

    PubMed Central

    Malloy, Elizabeth J.; Morris, Jeffrey S.; Adar, Sara D.; Suh, Helen; Gold, Diane R.; Coull, Brent A.

    2010-01-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1–7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants. PMID:20156988

  9. Neuromagnetic correlates of developmental changes in endogenous high-frequency brain oscillations in children: a wavelet-based beamformer study.

    PubMed

    Xiang, Jing; Liu, Yang; Wang, Yingying; Kotecha, Rupesh; Kirtman, Elijah G; Chen, Yangmei; Huo, Xiaolin; Fujiwara, Hisako; Hemasilpin, Nat; DeGrauw, Ton; Rose, Douglas

    2009-06-01

    Recent studies have found that the brain generates very fast oscillations. The objective of the present study was to investigate the spectral, spatial and coherent features of high-frequency brain oscillations in the developing brain. Sixty healthy children and 20 healthy adults were studied using a 275-channel magnetoencephalography (MEG) system. MEG data were digitized at 12,000 Hz. The frequency characteristics of neuromagnetic signals in 0.5-2000 Hz were quantitatively determined with Morlet wavelet transform. The magnetic sources were volumetrically estimated with wavelet-based beamformer at 2.5 mm resolution. The neural networks of endogenous brain oscillations were analyzed with coherent imaging. Neuromagnetic activities in 8-12 Hz and 800-900 Hz were found to be the most reliable frequency bands in healthy children. The neuromagnetic signals were localized in the occipital, temporal and frontal cortices. The activities in the occipital and temporal cortices were strongly correlated in 8-12 Hz but not in 800-900 Hz. In comparison to adults, children had brain oscillations in intermingled frequency bands. Developmental changes in children were identified for both low- and high-frequency brain activities. The results of the present study suggest that the development of the brain is associated with spatial and coherent changes of endogenous brain activities in both low- and high-frequency ranges. Analysis of high-frequency neuromagnetic oscillation may provide novel insights into cerebral mechanisms of brain function. The noninvasive measurement of neuromagnetic brain oscillations in the developing brain may open a new window for analysis of brain function. PMID:19362072

  10. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
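
    The basic mark-resight arithmetic used above scales the total count by the fraction of marked animals that were resighted. In the sketch below the 54% sighting rate and the April 1987 count of 427 are taken from the abstract, while the number of marked deer is a made-up placeholder (only the ratio matters).

      marked_total = 100          # collared deer known to be in the population (assumed)
      marked_seen = 54            # marked deer seen during the counts (54% resighted)
      total_seen = 427            # all deer counted (marked + unmarked), April 1987

      sighting_prob = marked_seen / marked_total
      herd_estimate = total_seen / sighting_prob
      print(f"estimated herd size: {herd_estimate:.0f}")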

  11. Estimation of a k-monotone density: characterizations, consistency and minimax lower bounds

    PubMed Central

    Balabdaoui, Fadoua; Wellner, Jon A.

    2010-01-01

    The classes of monotone or convex (and necessarily monotone) densities on ℝ+ can be viewed as special cases of the classes of k-monotone densities on ℝ+. These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities, for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on ℝ+. In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g0. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k − 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives g0^(j)(x0), j = 0, …, k − 1, at a fixed point x0 under the assumption that (−1)^k g0^(k)(x0) > 0. PMID:20436949

  12. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers’ estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method

  14. Impact of Building Heights on 3d Urban Density Estimation from Spaceborne Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Peng, Feifei; Gong, Jianya; Wang, Le; Wu, Huayi; Yang, Jiansi

    2016-06-01

    In urban planning and design applications, visualization of built-up areas in three dimensions (3D) is critical for understanding building density, but the accurate building heights required for 3D density calculation are not always available. To solve this problem, spaceborne stereo imagery is often used to estimate building heights; however, estimated building heights may include errors. These errors vary between local areas within a study area and are related to the heights of the buildings themselves, distorting 3D density estimation. The impact of building height accuracy on 3D density estimation must therefore be determined across and within a study area. In our research, accurate planar information from city authorities is used as reference data during 3D density estimation, to avoid the errors inherent in planar information extracted from remotely sensed imagery. Our experimental results show that underestimation of building heights is correlated with underestimation of the Floor Area Ratio (FAR). At the local level, land use blocks with low FAR values often have small errors, due to small building height errors for the low buildings in those blocks, while blocks with high FAR values often have large errors, due to large building height errors for the high buildings in those blocks. Our study reveals that the accuracy of 3D density estimated from spaceborne stereo imagery is correlated with the heights of buildings in a scene; building heights must therefore be considered when spaceborne stereo imagery is used to estimate 3D density, to improve precision.
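
    A minimal sketch of the Floor Area Ratio (FAR) calculation and its sensitivity to building height errors, the effect examined above. The footprints, heights, block area and assumed 3 m storey height are made-up illustrative values.

      import numpy as np

      footprints = np.array([400.0, 250.0, 600.0])   # building footprints, m^2
      true_heights = np.array([9.0, 30.0, 21.0])     # building heights, m
      block_area = 10_000.0                          # land-use block area, m^2

      def far(heights, storey_height=3.0):
          # Approximate floor count from height, then sum floor area over the block
          floors = np.maximum(np.round(heights / storey_height), 1)
          return float((footprints * floors).sum() / block_area)

      print("FAR with true heights:              ", far(true_heights))
      print("FAR with heights underestimated 20%:", far(0.8 * true_heights))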

  15. In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...

  16. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.

  17. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  18. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  19. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  20. Statistical Analysis of Photopyroelectric Signals using Histogram and Kernel Density Estimation for differentiation of Maize Seeds

    NASA Astrophysics Data System (ADS)

    Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2016-09-01

    Considering the necessity of photothermal alternative approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to analyze statistically the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and the probability density function of the amplitude variations of two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. To determine if the probability density function had a known parametric form, the histogram was determined which did not present a known parametric form, so the kernel density estimator using the Gaussian kernel, with an efficiency of 95 % in density estimation, was used to obtain the probability density function. The results obtained indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47) obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the case of the kernel density estimator, seeds can be differentiated in terms of kernel bandwidth or smoothing constant h of 9.85 and 6.09 for floury and crystalline seeds, respectively.

  1. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  2. Sea ice density estimation in the Bohai Sea using the hyperspectral remote sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Chengyu; Shao, Honglan; Xie, Feng; Wang, Jianyu

    2014-11-01

    Sea ice density is one of the significant physical properties of sea ice and an input parameter in the estimation of engineering mechanical strength and aerodynamic drag coefficients; it is also an important indicator of ice age. Sea ice in the Bohai Sea is a solid, liquid and gas-phase mixture composed of pure ice, brine pockets and bubbles, whose density is mainly affected by the amount of brine pockets and bubbles: the more brine pockets it contains, the greater the density; the more bubbles, the smaller the density. The reflectance spectrum in the 350-2500 nm range and the density of sea ice of different thicknesses and ages were measured in the Liaodong Bay of the Bohai Sea during the glacial maximum in the winter of 2012-2013. From the measured sea ice density and reflectance spectra, characteristic bands that reflect the sea ice density variation were found, and a sea ice density spectrum index (SIDSI) for sea ice in the Bohai Sea was constructed. Finally, an inversion model of sea ice density in the Bohai Sea, referring to the layer from the surface down to the depth of light penetration, was proposed. The sea ice density in the Bohai Sea was estimated with the proposed model from a Hyperion hyperspectral image. The results show that the error of the sea ice density inversion model is about 0.0004 g·cm⁻³. Sea ice density can thus be estimated from hyperspectral remote sensing images, providing data support for related marine science research and applications.

  3. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. The results indicate that the use of
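
    A hedged sketch of the idea behind the estimation method discussed above: with an assumed lognormal size distribution, simultaneous number and mass concentration measurements determine the count median diameter and hence a surface area concentration via the standard lognormal (Hatch-Choate) moment relations. The concentrations, geometric standard deviation and bulk density below are illustrative values, not the study's measurements.

      import numpy as np

      N = 5.0e11        # number concentration, particles per m^3 (assumed)
      M = 50.0e-9       # mass concentration, kg per m^3 (i.e. 50 ug/m^3, assumed)
      rho = 10.49e3     # bulk density of silver, kg per m^3
      gsd = 1.6         # assumed geometric standard deviation

      ln2 = np.log(gsd) ** 2
      # Lognormal moments: M = rho*(pi/6)*N*CMD^3*exp(4.5*ln2), S = pi*N*CMD^2*exp(2*ln2)
      cmd = (6.0 * M / (np.pi * rho * N * np.exp(4.5 * ln2))) ** (1.0 / 3.0)
      surface_area = np.pi * N * cmd ** 2 * np.exp(2.0 * ln2)

      print(f"count median diameter: {cmd * 1e9:.1f} nm")
      print(f"surface area concentration: {surface_area * 1e6:.0f} um^2/cm^3")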

  4. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals using the standard Common Standards for Quantitative Electrocardiography Database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria at each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite an overlap of spectral components of the QRS complex, P wave and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of T waves and P waves occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367
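
    As a rough illustration of the kind of wavelet processing such delineators build on (not the authors' algorithm), the sketch below detects QRS-like events by taking the stationary wavelet transform of a signal, squaring a detail band that roughly covers QRS energy, and peak-picking above a threshold. The wavelet family, decomposition level, threshold and synthetic test signal are all illustrative assumptions.

      import numpy as np
      import pywt
      from scipy.signal import find_peaks

      def detect_qrs(ecg, fs, wavelet="db4", level=4):
          # SWT needs a length divisible by 2**level; pad and trim afterwards
          pad = (-len(ecg)) % (2 ** level)
          x = np.pad(ecg, (0, pad), mode="edge")
          coeffs = pywt.swt(x, wavelet, level=level)
          # Deepest detail band (about 16-31 Hz at fs = 500 Hz), where most
          # QRS energy is expected to lie (illustrative choice)
          detail = coeffs[0][1][: len(ecg)]
          feature = detail ** 2
          peaks, _ = find_peaks(feature, height=0.25 * feature.max(),
                                distance=int(0.25 * fs))
          return peaks

      fs = 500.0
      t = np.arange(0, 10, 1 / fs)
      ecg = np.sin(2 * np.pi * 1.2 * t) ** 63       # crude spiky surrogate signal
      print("detected beats:", len(detect_qrs(ecg, fs)))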

  5. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. PMID:26774422

  6. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions.

    PubMed

    Daqrouq, K; Dobaie, A

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) based on the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method was shown to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, with an accuracy of 92.60%. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, was plotted and indicated that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412
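
    A minimal sketch of the percentage-energy feature described above, using PyWavelets to compute the terminal wavelet packet subsignals and their relative energies; the wavelet family and decomposition depth are assumptions, not the paper's settings.

      # Percentage energy of terminal wavelet packet nodes (wavelet/depth assumed)
      import numpy as np
      import pywt

      def wpt_percentage_energy(signal, wavelet="db4", level=4):
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                  mode="symmetric", maxlevel=level)
          nodes = wp.get_level(level, order="freq")
          energies = np.array([np.sum(node.data ** 2) for node in nodes])
          return 100.0 * energies / energies.sum()   # one PE value per subband

      rng = np.random.default_rng(0)
      x = rng.standard_normal(2048)
      pe = wpt_percentage_energy(x)
      print(len(pe), "subbands, energies sum to", round(pe.sum(), 1), "%")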

  7. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop stable, computationally inexpensive methods and numerical algorithms that can address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, not blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, the algorithm can work with any image, thanks to a pre-processing step that resizes the original image to a size suitable for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experiments on a large set of different images show that the watermark is resistant to geometric, filtering, and StirMark attacks, with a low false alarm rate.

  8. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around the local optima. In this regard, a local mutation operation is introduced in addition to regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system while accounting for force effects. Numerical and experimental verification demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of both computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.

  9. A real-time wavelet-based video decoder using SIMD technology

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin

    2008-02-01

    This paper presents a fast implementation of a wavelet-based video codec. The codec consists of motion-compensated temporal filtering (MCTF), a 2-D spatial wavelet transform, and SPIHT for wavelet coefficient coding. It offers compression efficiency that is competitive with H.264. The codec is implemented in software running on a general purpose PC, using the C programming language and streaming SIMD extensions intrinsics, without assembly language. This high-level software implementation allows the codec to be portable to other general-purpose computing platforms. Testing with a Pentium 4 HT at 3.6 GHz (running under Linux and using the GCC compiler, version 4) shows that the software decoder is able to decode 4CIF video in real time, over 2 times faster than software written only in the C language. This paper describes the structure of the codec, the fast algorithms chosen for the most computationally intensive elements in the codec, and the use of SIMD to implement these algorithms.

  10. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    PubMed Central

    Sappa, Angel D.; Carvajal, Juan A.; Aguilera, Cristhian A.; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X.

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  11. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study.

    PubMed

    Sappa, Angel D; Carvajal, Juan A; Aguilera, Cristhian A; Oliveira, Miguel; Romero, Dennis; Vintimilla, Boris X

    2016-01-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure the performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated here result from the combination of different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended. Sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR). PMID:27294938

  12. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to the difference of crude oil prices. The understanding of the pattern of the crude oil prices' mutual propagation is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to find different subseries based on an optimal decomposing scale to describe the periodical feature of the original oil price time series. Second, a complex network model was constructed based on an optimal threshold selection to describe the structural feature of multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find the probability causal relationship based on periodical structural features to describe the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.

  13. A wavelet-based image quality metric for the assessment of 3D synthesized views

    NASA Astrophysics Data System (ADS)

    Bosc, Emilie; Battisti, Federica; Carli, Marco; Le Callet, Patrick

    2013-03-01

    In this paper we present a novel image quality assessment technique for evaluating virtual synthesized views in the context of multi-view video. In particular, Free Viewpoint Videos are generated from uncompressed color views and their compressed associated depth maps by means of the View Synthesis Reference Software provided by MPEG. Prior to the synthesis step, the original depth maps are encoded with different coding algorithms, thus leading to the creation of additional artifacts in the synthesized views. The core of the proposed wavelet-based metric lies in the registration procedure performed to align the synthesized view with the original one, and in the skin detection applied on the grounds that the same distortion is more annoying when visible on human subjects than on other parts of the scene. The effectiveness of the metric is evaluated by analyzing the correlation of the scores obtained with the proposed metric with Mean Opinion Scores collected by means of subjective tests. The achieved results are also compared against those of well-known objective quality metrics. The experimental results confirm the effectiveness of the proposed metric.

  14. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    PubMed

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made in order to predict seizures. Among these methods, the ones that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which has not been considered before, is taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients. PMID:26126613

  15. Wavelet-Based ECG Steganography for Protecting Patient Confidential Information in Point-of-Care Systems.

    PubMed

    Ibaida, Ayman; Khalil, Ibrahim

    2013-12-01

    With a growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as point-of-care (PoC) applications in hospitals around the world. Therefore, huge amounts of ECG data collected by body sensor networks from remote patients at home will be transmitted along with other physiological readings, such as blood pressure, temperature and glucose level, and diagnosed by those remote patient monitoring systems. It is critically important that patient confidentiality is protected while data are being transmitted over the public network as well as when they are stored in hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and scrambling techniques to protect patient confidential data. The proposed method allows the ECG signal to hide the corresponding patient's confidential data and other physiological information, thus guaranteeing that the ECG and the hidden data remain integrated. To evaluate the effect of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the percentage residual difference and the wavelet-weighted PRD. It is found that the proposed technique provides high-security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding patient confidential data) as well as after the watermarks (i.e., hidden data) are removed from the watermarked data. PMID:23708767

  16. Application of wavelet-based neural network on DNA microarray data.

    PubMed

    Lee, Jack; Zee, Benny

    2008-01-01

    The advantage of using DNA microarray data when investigating human cancer gene expression is its ability to generate enormous amounts of information from a single assay, speeding up the scientific evaluation process. The number of variables in the gene expression data, coupled with a comparatively much smaller number of samples, creates new challenges for scientists and statisticians. In particular, the problems include an enormous degree of collinearity among gene expressions, likely violations of model assumptions, and a high level of noise with potential outliers. To deal with these problems, we propose a block wavelet shrinkage principal component analysis (BWSPCA) method to optimize the information during the noise reduction process. This paper first uses the National Cancer Institute database (NCI60) as an illustration and shows a significant improvement in dimension reduction. Secondly, we combine BWSPCA with an artificial neural network-based gene minimization strategy to establish a Block Wavelet-based Neural Network model (BWNN) for a robust and accurate cancer classification process. Our extensive experiments on six public cancer datasets have shown that the BWNN method for tumor classification performed well, especially on some difficult instances with large-class (more than two) expression data. The proposed method is extremely useful for data denoising and is competitive with respect to other methods such as BagBoost, RandomForest (RanFor), Support Vector Machines (SVM), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN). PMID:19255638

  17. Radiation dose reduction in digital radiography using wavelet-based image processing methods

    NASA Astrophysics Data System (ADS)

    Watanabe, Haruyuki; Tsai, Du-Yih; Lee, Yongbum; Matsuyama, Eri; Kojima, Katsuyuki

    2011-03-01

    In this paper, we investigate the effect of using the wavelet transform for image processing on radiation dose reduction in computed radiography (CR) by measuring various physical characteristics of the wavelet-transformed images. Moreover, we propose a wavelet-based method that offers the possibility of reducing radiation dose while maintaining a clinically acceptable image quality. The proposed method integrates the advantages of a previously proposed technique, i.e., a sigmoid-type transfer curve for the wavelet coefficient weighting adjustment technique, as well as a wavelet soft-thresholding technique. The former can improve the contrast and spatial resolution of CR images; the latter is able to reduce image noise. In the investigation of physical characteristics, the modulation transfer function, noise power spectrum, and contrast-to-noise ratio of CR images processed by the proposed method and other methods were measured and compared. Furthermore, visual evaluation was performed using Scheffé's paired comparison method. Experimental results showed that the proposed method could improve overall image quality as compared to other methods. Our visual evaluation showed that an approximately 40% reduction in exposure dose might be achieved in hip joint radiography by using the proposed method.
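
    The record describes the two processing ingredients but not their parameter values, so the sketch below only illustrates the combination on a 2-D wavelet decomposition: a sigmoid-type weighting that boosts larger detail coefficients, followed by soft thresholding of the same coefficients. The wavelet, gain, slope, centre and threshold values are placeholders, and the weighting form is an assumption rather than the paper's transfer curve.

      import numpy as np
      import pywt

      def enhance_and_denoise(img, wavelet="db2", level=2,
                              gain=1.8, center=10.0, slope=5.0, thr=4.0):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          out = [coeffs[0]]                                   # keep approximation band
          for details in coeffs[1:]:
              new = []
              for c in details:
                  # Sigmoid-type weighting: ~1 for small coefficients, ~gain for large ones
                  w = 1.0 + (gain - 1.0) / (1.0 + np.exp(-(np.abs(c) - center) / slope))
                  c = w * c
                  c = pywt.threshold(c, thr, mode="soft")     # noise suppression
                  new.append(c)
              out.append(tuple(new))
          return pywt.waverec2(out, wavelet)

      img = np.random.default_rng(1).normal(100.0, 5.0, size=(128, 128))
      processed = enhance_and_denoise(img)
      print(processed.shape)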

  18. Wavelet Based Method for Congestive Heart Failure Recognition by Three Confirmation Functions

    PubMed Central

    Daqrouq, K.; Dobaie, A.

    2016-01-01

    An investigation of electrocardiogram (ECG) signals and arrhythmia characterization by wavelet energy is proposed. This study employs a wavelet-based feature extraction method for congestive heart failure (CHF) based on the percentage energy (PE) of terminal wavelet packet transform (WPT) subsignals. In addition, the average framing percentage energy (AFE) technique is proposed, termed WAFE. A new classification method is introduced by three confirmation functions. The confirmation methods are based on three concepts: percentage root mean square difference error (PRD), logarithmic difference signal ratio (LDSR), and correlation coefficient (CC). The proposed method was shown to be a potentially effective discriminator for recognizing this clinical syndrome. ECG signals taken from the MIT-BIH arrhythmia dataset and other databases are utilized to analyze different arrhythmias and normal ECGs. Several known methods were studied for comparison. The best recognition rate was obtained with WAFE, with an accuracy of 92.60%. The receiver operating characteristic curve, a common tool for evaluating diagnostic accuracy, was plotted and indicated that the tests are reliable. The performance of the presented system was investigated in an additive white Gaussian noise (AWGN) environment, where the recognition rate was 81.48% at 5 dB. PMID:26949412

  19. Image-based scene representation using wavelet-based interval morphing

    NASA Astrophysics Data System (ADS)

    Bao, Paul; Xu, Dan

    1999-07-01

    Scene appearance for a continuous range of viewpoints can be represented by a discrete set of images via image morphing. In this paper, we present a new robust image morphing scheme based on the 2D wavelet transform and interval field interpolation. Traditional mesh-based and field-based morphing algorithms, usually designed in the spatial image space, suffer from very high time complexity and are therefore impractical in real-time virtual environment applications. Compared with traditional morphing methods, the proposed wavelet-based interval morphing scheme performs interval interpolation in both the frequency and spatial domains. First, the images of the scene can be significantly compressed in the frequency domain with little degradation in visual quality, and therefore the complexity of the scene can be significantly reduced. Second, since a feature point in the image may correspond to a neighborhood in a subband image in the wavelet domain, we define feature intervals for the wavelet-transformed images for accurate feature matching between the morphing images. Based on the feature intervals, we employ interval field interpolation to morph the images progressively in a coarse-to-fine process. Finally, we use a post-warping procedure to transform the interpolated views to their desired positions. A nice feature of using the wavelet transform is its multiresolution representation, which enables progressive morphing of the scene.

  20. Selective error detection for error-resilient wavelet-based image coding.

    PubMed

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the Binary Symmetric Channel, show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions as compared to conventional ARQ-based schemes. The presented results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme can achieve significantly higher PSNR and better visual quality as compared to the selective-repeat ARQ scheme. PMID:18092593

  1. Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis

    PubMed Central

    Gerasimova, Evgeniya; Audit, Benjamin; Roux, Stephane G.; Khalil, André; Gileva, Olga; Argoul, Françoise; Naimark, Oleg; Arneodo, Alain

    2014-01-01

    Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development. PMID:24860510

  2. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824

  3. Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches?

    PubMed

    Bruns, Andreas

    2004-08-30

    Spectral signal analysis constitutes one of the most important and most commonly used analytical tools for the evaluation of neurophysiological signals. It is not only the spectral parameters per se (amplitude and phase) which are of interest, but there is also a variety of measures derived from them, including important coupling measures like coherence or phase synchrony. After reviewing some of these measures in order to underline the widespread relevance of spectral analysis, this report compares the three classical spectral analysis approaches: Fourier, Hilbert and wavelet transform. Recently, there seems to be increasing acceptance of the notion that Hilbert- or wavelet-based analyses are in some way superior to Fourier-based analyses. The present article counters such views by demonstrating that the three techniques are in fact formally (i.e. mathematically) equivalent when using the class of wavelets that is typically applied in spectral analyses. Moreover, spectral amplitude serves as an example to show that Fourier, Hilbert and wavelet analysis also yield equivalent results in practical applications to neuronal signals. PMID:15262077
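
    The practical equivalence argued above can be checked numerically: the amplitude obtained by convolving a signal with a complex Morlet wavelet closely tracks the Hilbert envelope of the same signal after band-pass filtering around the wavelet's centre frequency. The sketch below does this for a synthetic amplitude-modulated oscillation; the wavelet bandwidth and Butterworth filter are illustrative choices.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs, f0 = 1000.0, 20.0                        # sampling and analysis frequency
      t = np.arange(0, 2, 1 / fs)
      x = (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)) * np.sin(2 * np.pi * f0 * t)

      # Complex Morlet wavelet (Gaussian window, ~7 cycles)
      n_cyc = 7.0
      sigma_t = n_cyc / (2 * np.pi * f0)
      tw = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
      morlet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw ** 2 / (2 * sigma_t ** 2))
      morlet /= np.sum(np.abs(morlet))             # amplitude normalization
      amp_wavelet = np.abs(np.convolve(x, morlet, mode="same"))

      # Hilbert envelope of the band-passed signal (roughly matched bandwidth)
      sos = butter(4, [f0 - 3, f0 + 3], btype="band", fs=fs, output="sos")
      amp_hilbert = np.abs(hilbert(sosfiltfilt(sos, x)))

      print("correlation:", np.corrcoef(amp_wavelet, amp_hilbert)[0, 1].round(3))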

  4. Wavelet-based detection of abrupt changes in natural frequencies of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, K.; Staszewski, W. J.; Basu, B.; Uhl, T.

    2015-12-01

    Detection of abrupt changes in natural frequencies from vibration responses of time-variant systems is a challenging task due to the complex nature of physics involved. It is clear that the problem needs to be analysed in the combined time-frequency domain. The paper proposes an application of the input-output wavelet-based Frequency Response Function for this analysis. The major focus and challenge relate to ridge extraction of the above time-frequency characteristics. It is well known that classical ridge extraction procedures lead to ridges that are smooth. However, this property is not desired when abrupt changes in the dynamics are considered. The methods presented in the paper are illustrated using simulated and experimental multi-degree-of-freedom systems. The results are compared with the classical Frequency Response Function and with the output only analysis based on the wavelet auto-power response spectrum. The results show that the proposed method captures correctly the dynamics of the analysed time-variant systems.

  5. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  6. Segmentation of complementary DNA microarray images by wavelet-based Markov random field model.

    PubMed

    Athanasiadis, Emmanouil I; Cavouras, Dionisis A; Glotsos, Dimitris Th; Georgiadis, Pantelis V; Kalatzis, Ioannis K; Nikiforidis, George C

    2009-11-01

    A wavelet-based modification of the Markov random field (WMRF) model is proposed for segmenting complementary DNA (cDNA) microarray images. For evaluation purposes, five simulated and a set of five real microarray images were used. The one-level stationary wavelet transform (SWT) of each microarray image was used to form two images, a denoised image, using hard thresholding filter, and a magnitude image, from the amplitudes of the horizontal and vertical components of SWT. Elements from these two images were suitably combined to form the WMRF model for segmenting spots from their background. The WMRF was compared against the conventional MRF and the Fuzzy C means (FCM) algorithms on simulated and real microarray images and their performances were evaluated by means of the segmentation matching factor (SMF) and the coefficient of determination (r2). Additionally, the WMRF was compared against the SPOT and SCANALYZE, and performances were evaluated by the mean absolute error (MAE) and the coefficient of variation (CV). The WMRF performed more accurately than the MRF and FCM (SMF: 92.66, 92.15, and 89.22, r2 : 0.92, 0.90, and 0.84, respectively) and achieved higher reproducibility than the MRF, SPOT, and SCANALYZE (MAE: 497, 1215, 1180, and 503, CV: 0.88, 1.15, 0.93, and 0.90, respectively). PMID:19783509

  7. A wavelet-based approach to detecting liveness in fingerprint scanners

    NASA Astrophysics Data System (ADS)

    Abhyankar, Aditya S.; Schuckers, Stephanie C.

    2004-08-01

    In this work, a method for fingerprint vitality authentication is introduced in order to reduce the vulnerability of fingerprint identification systems to spoofing. The method aims at detecting 'liveness' in fingerprint scanners by using the physiological phenomenon of perspiration. A wavelet-based approach is used which concentrates on the changing coefficients using the zoom-in property of wavelets. Multiresolution analysis and wavelet packet analysis are used to extract information from the low-frequency and high-frequency content of the images, respectively. A Daubechies wavelet is designed and implemented to perform the wavelet analysis. A threshold is applied to the first difference of the information in all the sub-bands. The energy content of the changing coefficients is used as a quantified measure to perform the desired classification, as they reflect a perspiration pattern. A data set of approximately 30 live, 30 spoof, and 14 cadaver fingerprint images was divided, with the first half used as training data and the other half as testing data. The proposed algorithm was applied to the training data set and was able to completely separate 'live' fingers from 'not live' fingers, thus providing a method for enhanced security and improved spoof protection.

  8. Performance evaluation of wavelet-based face verification on a PDA recorded database

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2006-05-01

    The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still images and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions, by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.

  9. Incipient interturn fault diagnosis in induction machines using an analytic wavelet-based optimized Bayesian inference.

    PubMed

    Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan

    2014-05-01

    Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and in pruning a large number of parameters. In this paper, a nearly shift insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm constitutes two parts and runs in an iterative way. In the first part, the PNN structure determination has been discussed, which finds out the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion has been recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of PNN is determined, which reduces the size, without compromising the performance. Moreover, a comparative evaluation with traditional discrete wavelet transform-based method is demonstrated for performance evaluation and to appreciate the obtained results. PMID:24808044

  10. The Analysis of Surface EMG Signals with the Wavelet-Based Correlation Dimension Method

    PubMed Central

    Zhang, Yanyan; Wang, Jue

    2014-01-01

    Many attempts have been made to effectively improve a prosthetic system controlled by the classification of surface electromyographic (SEMG) signals. Recently, the development of methodologies to extract the effective features still remains a primary challenge. Previous studies have demonstrated that the SEMG signals have nonlinear characteristics. In this study, by combining the nonlinear time series analysis and the time-frequency domain methods, we proposed the wavelet-based correlation dimension method to extract the effective features of SEMG signals. The SEMG signals were firstly analyzed by the wavelet transform and the correlation dimension was calculated to obtain the features of the SEMG signals. Then, these features were used as the input vectors of a Gustafson-Kessel clustering classifier to discriminate four types of forearm movements. Our results showed that there are four separate clusters corresponding to different forearm movements at the third resolution level and the resulting classification accuracy was 100%, when two channels of SEMG signals were used. This indicates that the proposed approach can provide important insight into the nonlinear characteristics and the time-frequency domain features of SEMG signals and is suitable for classifying different types of forearm movements. By comparing with other existing methods, the proposed method exhibited more robustness and higher classification accuracy. PMID:24868240

  11. Estimation of mechanical properties of panels based on modal density and mean mobility measurements

    NASA Astrophysics Data System (ADS)

    Elie, Benjamin; Gautier, François; David, Bertrand

    2013-11-01

    The mechanical characteristics of wood panels used by instrument makers are related to numerous factors, including the nature of the wood and the characteristics of the wood sample (fiber direction, microstructure). This leads to variations in Young's modulus, the mass density, and the damping coefficients. Existing methods for estimating these parameters are not suitable for instrument makers, mainly because of the need for expensive experimental setups or complicated protocols, which are not adapted to daily practice in a workshop. In this paper, a method for estimating Young's modulus, the mass density, and the modal loss factors of flat panels, requiring only a few measurement points and an affordable experimental setup, is presented. It is based on the estimation of two characteristic quantities: the modal density and the mean mobility. The modal density is computed from the values of the modal frequencies estimated by the subspace method ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques), associated with the signal enumeration technique ESTER (ESTimation of ERror). This modal identification technique proves to be robust in the low- and mid-frequency domains, i.e. when the modal overlap factor does not exceed 1. The estimation of the modal parameters also enables the computation of the modal loss factor in the low- and mid-frequency domains. An experimental fit with the theoretical expressions for the modal density and the mean mobility enables an accurate estimation of Young's modulus and the mass density of flat panels. A numerical and an experimental study show that the method is robust and that it requires only a few measurement points.

  12. Estimating bulk density of compacted grains in storage bins and modifications of Janssen's load equations as affected by bulk density.

    PubMed

    Haque, Ekramul

    2013-03-01

    Janssen created a classical theory based on calculus to estimate static vertical and horizontal pressures within beds of bulk corn. Even today, his equations are widely used to calculate the static loadings imposed by granular materials stored in bins. Many standards, such as American Concrete Institute (ACI) 313, American Society of Agricultural and Biological Engineers EP 433, German DIN 1055, the Canadian Farm Building Code (CFBC), the European Code (ENV 1991-4), and Australian Code AS 3774, incorporate Janssen's equations as the basis for static load calculations on bins. One of the main drawbacks of Janssen's equations is the assumption that the bulk density of the stored product remains constant throughout the entire bin. While this is true for small bins for all practical purposes, in modern commercial-size bins the bulk density of grains increases substantially due to compressive and hoop stresses. Overpressure factors are applied to Janssen loadings to account for practical situations such as dynamic loads due to bin filling and emptying, but there are limited theoretical methods available that include the effects of increased bulk density on the grain loads transmitted to the storage structures. This article develops a mathematical equation expressing the specific weight as a function of depth and other material and storage variables. It was found that the bulk density of stored granular materials increases with depth according to a mathematical equation relating the two variables, and applying this bulk-density function, Janssen's equations for vertical and horizontal pressures were modified as presented in this article. The validity of this specific weight function was tested using the principles of mathematics. As expected, load calculations based on the modified equations were consistently higher than the Janssen loadings based on noncompacted bulk densities for all grain depths and types, accounting for the effects of increased bulk densities.
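
    For reference, Janssen's vertical pressure satisfies dPv/dz = γ(z) − (μK/Rh)·Pv with Pv(0) = 0, which has the familiar closed form only when the specific weight γ is constant. The sketch below integrates the same equation numerically with a depth-dependent γ(z); the compaction law and all parameter values are illustrative placeholders, not the article's fitted function.

      import numpy as np

      mu, K = 0.30, 0.40           # wall friction and lateral pressure ratio (assumed)
      R_h = 2.0                     # hydraulic radius of the bin, m (assumed)
      gamma0 = 7500.0               # loose bulk specific weight, N/m^3 (assumed)

      def gamma(z):
          # Hypothetical mild compaction with depth (saturating at +8 %)
          return gamma0 * (1.0 + 0.08 * (1.0 - np.exp(-z / 10.0)))

      z = np.linspace(0.0, 30.0, 3001)
      dz = z[1] - z[0]
      c = mu * K / R_h

      # Classical Janssen solution (constant bulk density)
      Pv_const = gamma0 / c * (1.0 - np.exp(-c * z))

      # Numerical integration of dPv/dz = gamma(z) - c * Pv (explicit Euler)
      Pv = np.zeros_like(z)
      for i in range(1, len(z)):
          Pv[i] = Pv[i - 1] + dz * (gamma(z[i - 1]) - c * Pv[i - 1])

      print(f"Pv at 30 m: constant-density {Pv_const[-1] / 1e3:.1f} kPa, "
            f"compacted {Pv[-1] / 1e3:.1f} kPa")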

  13. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species. PMID:27509753

  14. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  15. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
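
    As a toy illustration of the "ecological distance" idea described above (not the authors' likelihood machinery), the sketch below builds a small resistance raster, computes least-cost distances from a trap location to every cell with Dijkstra's algorithm on the grid graph, and contrasts the result with the Euclidean distance before plugging both into a half-normal encounter probability. The resistance values, trap layout and detection parameters are invented.

      import numpy as np
      from scipy.sparse import lil_matrix
      from scipy.sparse.csgraph import dijkstra

      n = 20                                        # 20 x 20 resistance raster
      resistance = np.ones((n, n))
      resistance[:, 8:12] = 10.0                    # a costly band (e.g. unsuitable habitat)

      def grid_graph(cost):
          """4-neighbour graph; edge weight = mean resistance of the two cells."""
          g = lil_matrix((n * n, n * n))
          for i in range(n):
              for j in range(n):
                  for di, dj in ((0, 1), (1, 0)):
                      ii, jj = i + di, j + dj
                      if ii < n and jj < n:
                          w = 0.5 * (cost[i, j] + cost[ii, jj])
                          g[i * n + j, ii * n + jj] = w
                          g[ii * n + jj, i * n + j] = w
          return g.tocsr()

      traps = [(5, 5), (5, 15), (15, 5), (15, 15)]
      ecodist = dijkstra(grid_graph(resistance), indices=[i * n + j for i, j in traps])

      # Half-normal encounter probability with ecological vs. Euclidean distance
      p0, sigma = 0.2, 4.0
      cell = (5, 14)                                # a cell beyond the costly band
      d_eco = ecodist[0, cell[0] * n + cell[1]]
      d_euc = np.hypot(cell[0] - traps[0][0], cell[1] - traps[0][1])
      for name, d in (("ecological", d_eco), ("Euclidean", d_euc)):
          print(f"{name}: distance {d:.1f}, "
                f"encounter prob {p0 * np.exp(-d ** 2 / (2 * sigma ** 2)):.4f}")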

  16. An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates

    NASA Astrophysics Data System (ADS)

    Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta

    2015-06-01

    The developed analytic model for toroidal oscillations under an infinitely conducting ionosphere ("Rigid-end") has been extended to the "Free-end" case, when the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of the electric and magnetic field oscillations associated with the "Free-end" toroidal wave for a generalized plasma distribution characterized by the power law ρ = ρ0(r0/r)^m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over the standard WKB solution and an excellent agreement with the numerical exact solution (NES) affirm the validity and advancement of DAM. In addition, we estimate the equatorial ion number density (assuming the H+ atom as the only species) using DAM, NES, and standard WKB for the Rigid-end as well as the Free-end case and illustrate their respective implications in computing ion number density. It is seen that the WKB method overestimates the equatorial ion density under the Rigid-end condition and underestimates it under the Free-end condition. The density estimates through DAM are far more accurate than those computed through WKB. Earlier analytic estimates of ion number density were restricted to m = 6, whereas DAM can account for generalized m while reproducing the density for m = 6 as envisaged by earlier models.

  17. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective: The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods: A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results: Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion: Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge: The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility of grey value calibration was thoroughly investigated. PMID:23255537
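
    The correlation-and-recalibration step described above amounts to a simple linear fit per device; the sketch below shows that calculation on invented insert values (the study's measured grey values are not reproduced in this record, and the choice of "medium-density" inserts here is an assumption).

      import numpy as np

      # CT numbers (HU) of the six inserts on MSCT, and grey values from one CBCT
      # device; all values below are invented for illustration.
      hu_msct   = np.array([-1000.0, 120.0, 170.0, 240.0, 420.0, 2700.0])
      grey_cbct = np.array([ -880.0, 310.0, 360.0, 430.0, 610.0, 2450.0])

      r = np.corrcoef(grey_cbct, hu_msct)[0, 1]                # Pearson correlation

      # Linear recalibration fitted on all inserts
      slope, intercept = np.polyfit(grey_cbct, hu_msct, 1)
      recalibrated = slope * grey_cbct + intercept

      # Average absolute error over the medium-density inserts (PMMA and HA, assumed)
      mid = slice(1, 5)
      err = np.mean(np.abs(recalibrated[mid] - hu_msct[mid]))
      print(f"Pearson r = {r:.4f}, mean recalibration error = {err:.1f} HU")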

  18. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
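
    As a minimal illustration of capture-recapture abundance estimation from photo-identified individuals (the study fits multi-occasion closed-population models, which are more general than this), the sketch below applies the two-sample Chapman estimator and converts the abundance to a density; all counts and the survey area are invented.

      def chapman(n1, n2, m2):
          """n1, n2: individuals photographed in occasions 1 and 2; m2: recaptures."""
          n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
          var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
                 / ((m2 + 1) ** 2 * (m2 + 2)))
          return n_hat, var ** 0.5

      n_hat, se = chapman(n1=12, n2=10, m2=6)
      area_km2 = 200.0                              # effective sampled area (invented)
      print(f"N = {n_hat:.1f} (SE {se:.1f}); "
            f"density = {100 * n_hat / area_km2:.1f} tigers/100 km^2")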

  19. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  20. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  1. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651

  2. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, more accurately estimated the fractions of LULC types than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean forest carbon density for the province was 30.06 t·hm-2, ranging from 0.00 to 67.35 t·hm-2. This implied that spectral mixture analysis provides great potential to increase the estimation accuracy of forest carbon density at regional and global levels. PMID:26915200

  3. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
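
    As a rough illustration of the sparse-binning idea described above, the following Python sketch stores bin counts in a hash table (a dictionary) so that only occupied cells consume memory. It is a minimal stand-in, not the authors' C++ BASH-table implementation; the bin width and data are illustrative.

      from collections import defaultdict
      import numpy as np

      def bash_density(points, bin_width):
          """Density at each sample via sparse multidimensional binning (hash-table counts)."""
          points = np.asarray(points, dtype=float)
          n, d = points.shape
          counts = defaultdict(int)                    # hash table: cell index -> count
          cells = [tuple(np.floor(p / bin_width).astype(int)) for p in points]
          for c in cells:
              counts[c] += 1
          cell_volume = bin_width ** d
          # density estimate at a sample = count of its cell / (n * cell volume)
          return np.array([counts[c] / (n * cell_volume) for c in cells])

      rng = np.random.default_rng(0)
      x = rng.normal(size=(100_000, 5))                # 5-dimensional toy sample
      dens = bash_density(x, bin_width=0.5)
      print("density estimates at the first three points:", dens[:3])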

  4. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information

  5. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  6. A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder

    NASA Astrophysics Data System (ADS)

    Schroth-Miller, Madeline L.

    Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.

  7. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  8. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  9. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.

    2009-01-01

    1. Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
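
    The detection component of such a model can be sketched in a few lines: detection probability decays with the distance between an animal's activity centre and a trap, here with a half-normal form. This is an illustrative simulation only (grid, centres, and parameter values are made up), not the authors' Bayesian data-augmentation implementation.

      import numpy as np

      def detection_prob(centres, traps, g0=0.1, sigma=0.5):
          """p_ij = g0 * exp(-d_ij^2 / (2 sigma^2)) for animal i and trap j."""
          d = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=2)
          return g0 * np.exp(-d**2 / (2.0 * sigma**2))

      rng = np.random.default_rng(1)
      traps = np.stack(np.meshgrid(np.linspace(0, 4, 5), np.linspace(0, 4, 5)), -1).reshape(-1, 2)
      centres = rng.uniform(-1, 5, size=(30, 2))       # 30 simulated activity centres
      p = detection_prob(centres, traps)               # shape (animals, traps)
      captures = rng.random((5,) + p.shape) < p        # 5 sampling occasions
      print("individuals detected at least once:", int(np.any(captures, axis=(0, 2)).sum()))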

  10. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.

  11. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100 km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  12. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100 km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  13. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing

    PubMed Central

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R.

    2014-01-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consists of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet had better segmental SNR in about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428

  14. Wavelet-Based Spatial Scaling of Coupled Reaction-Diffusion Fields

    SciTech Connect

    Mishra, Sudib; Muralidharan, Krishna; Deymier, Pierre; Frantziskonis, G.; Pannala, Sreekanth; Simunovic, Srdjan

    2008-01-01

    Multiscale schemes for transferring information from fine to coarse scales are typically based on homogenization techniques. Such schemes smooth the fine scale features of the underlying fields, often resulting in the inability to accurately retain the fine scale correlations. In addition, higher-order statistical moments (beyond mean) of the relevant field variables are not necessarily preserved. As a superior alternative to averaging homogenization methods, a wavelet-based scheme for the exchange of information between a reactive and diffusive field in the context of multiscale reaction-diffusion problems is proposed and analyzed. The scheme is shown to be efficient in passing information along scales, from fine to coarse, i.e., upscaling as well as from coarse to fine, i.e., downscaling. It incorporates fine scale statistics (higher-order moments beyond mean), mainly due to the capability of wavelets to represent fields hierarchically. Critical to the success of the scheme is the identification of dominant scales containing the majority of the useful information. The dominant scales in effect specify the coarsest resolution possible. The scheme is applied in detail to the analysis of a diffusive system with a chemically reacting boundary. Reactions are simulated using kinetic Monte Carlo (kMC) and diffusion is solved by finite differences (FDs). Spatial scale differences are present at the interface of the kMC sites and the diffusion grid. The computational efficiency of the scheme is compared to results obtained by averaging homogenization, and to results from a benchmark scheme that ensures spatial scale parity between kMC and FD.
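
    The general mechanism of passing only the dominant scales of a wavelet decomposition between solvers can be sketched with PyWavelets. The field, wavelet, and the energy threshold used to pick dominant scales below are illustrative; the paper's kMC/FD coupling and its selection rule are more involved.

      import numpy as np
      import pywt

      rng = np.random.default_rng(2)
      x = np.linspace(0, 1, 1024)
      fine_field = np.sin(2 * np.pi * 3 * x) + 0.3 * rng.normal(size=x.size)

      coeffs = pywt.wavedec(fine_field, 'db4', level=5)        # [cA5, cD5, ..., cD1]
      energy = [np.sum(c**2) for c in coeffs]
      keep = [e > 0.05 * max(energy) for e in energy]          # keep dominant scales only
      upscaled = [c if k else np.zeros_like(c) for c, k in zip(coeffs, keep)]
      coarse_field = pywt.waverec(upscaled, 'db4')             # field handed to the coarse solver
      print("retained scales:", sum(keep), "of", len(coeffs))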

  15. A wavelet-based neural model to optimize and read out a temporal population code

    PubMed Central

    Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.

    2012-01-01

    wavelet-based decoders. PMID:22563314

  16. An Undecimated Wavelet-based Method for Cochlear Implant Speech Processing.

    PubMed

    Hajiaghababa, Fatemeh; Kermani, Saeed; Marateb, Hamid R

    2014-10-01

    A cochlear implant is an implanted electronic device used to provide a sensation of hearing to a person who is hard of hearing. The cochlear implant is often referred to as a bionic ear. This paper presents an undecimated wavelet-based speech coding strategy for cochlear implants, which gives a novel speech processing strategy. The undecimated wavelet packet transform (UWPT) is computed like the wavelet packet transform except that it does not down-sample the output at each level. The speech data used for the current study consists of 30 consonants, sampled at 16 kbps. The performance of our proposed UWPT method was compared to that of an infinite impulse response (IIR) filter in terms of mean opinion score (MOS), short-time objective intelligibility (STOI) measure and segmental signal-to-noise ratio (SNR). The undecimated wavelet had better segmental SNR in about 96% of the input speech data. The MOS of the proposed method was twice that of the IIR filter-bank. The statistical analysis revealed that the UWT-based N-of-M strategy significantly improved the MOS, STOI and segmental SNR (P < 0.001) compared with those obtained with the IIR filter-bank based strategies. The advantage of the UWPT is that it is shift-invariant, which gives a dense approximation to the continuous wavelet transform. Thus, the information loss is minimal, and that is why the UWPT performance was better than that of traditional filter-bank strategies in speech recognition tests. Results showed that the UWPT could be a promising method for speech coding in cochlear implants, although its computational complexity is higher than that of traditional filter-banks. PMID:25426428
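
    A shift-invariant wavelet analysis with an N-of-M style band selection can be sketched with PyWavelets. Note that pywt.swt is the plain stationary (undecimated) wavelet transform, used here only as a stand-in for the paper's undecimated wavelet packet transform; the test signal, level, and frame length are illustrative.

      import numpy as np
      import pywt

      fs, n = 16000, 2**14                                 # length must be divisible by 2**level
      t = np.arange(n) / fs
      signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1800 * t)

      level = 5
      swt_coeffs = pywt.swt(signal, 'db2', level=level)    # [(cA5, cD5), ..., (cA1, cD1)]
      detail_bands = [cD for _, cD in swt_coeffs]

      # N-of-M selection for the first frame: keep the 2 bands with the largest energy;
      # in a real strategy the envelopes of the selected bands would drive the electrodes.
      frame, n_of_m = 512, 2
      energies = [np.sum(b[:frame]**2) for b in detail_bands]
      selected = np.argsort(energies)[-n_of_m:]
      print("bands selected for the first frame:", sorted(selected.tolist()))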

  17. Wavelet-based clustering of resting state MRI data in the rat.

    PubMed

    Medda, Alessio; Hoffmann, Lukas; Magnuson, Matthew; Thompson, Garth; Pan, Wen-Ju; Keilholz, Shella

    2016-01-01

    While functional connectivity has typically been calculated over the entire length of the scan (5-10min), interest has been growing in dynamic analysis methods that can detect changes in connectivity on the order of cognitive processes (seconds). Previous work with sliding window correlation has shown that changes in functional connectivity can be observed on these time scales in the awake human and in anesthetized animals. This exciting advance creates a need for improved approaches to characterize dynamic functional networks in the brain. Previous studies were performed using sliding window analysis on regions of interest defined based on anatomy or obtained from traditional steady-state analysis methods. The parcellation of the brain may therefore be suboptimal, and the characteristics of the time-varying connectivity between regions are dependent upon the length of the sliding window chosen. This manuscript describes an algorithm based on wavelet decomposition that allows data-driven clustering of voxels into functional regions based on temporal and spectral properties. Previous work has shown that different networks have characteristic frequency fingerprints, and the use of wavelets ensures that both the frequency and the timing of the BOLD fluctuations are considered during the clustering process. The method was applied to resting state data acquired from anesthetized rats, and the resulting clusters agreed well with known anatomical areas. Clusters were highly reproducible across subjects. Wavelet cross-correlation values between clusters from a single scan were significantly higher than the values from randomly matched clusters that shared no temporal information, indicating that wavelet-based analysis is sensitive to the relationship between areas. PMID:26481903
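
    The core idea, summarizing each voxel time series by a wavelet-derived frequency fingerprint and then clustering, can be sketched as follows. The features (detail-level energies) and the k-means step are simplifications of the published algorithm, and the data here are random placeholders for BOLD time series.

      import numpy as np
      import pywt
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      ts = rng.normal(size=(500, 512))                 # stand-in for 500 voxel time series

      def wavelet_energy_features(x, wavelet='db4', level=5):
          coeffs = pywt.wavedec(x, wavelet, level=level)       # [cA, cD_level, ..., cD_1]
          return np.array([np.sum(c**2) for c in coeffs[1:]])  # detail-band energies only

      features = np.vstack([wavelet_energy_features(v) for v in ts])
      labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
      print("voxels per cluster:", np.bincount(labels))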

  18. Estimation of localized current anomalies in polymer electrolyte fuel cells from magnetic flux density measurements

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Koike, Masanori; Ando, Shigeru; Gotoh, Yuji; Izumi, Masaaki

    2016-05-01

    In this paper, we propose novel inversion methods to estimate defects or localized current anomalies in membrane electrode assemblies (MEAs) in polymer electrolyte fuel cells (PEFCs). One method is an imaging approach with L1-norm regularization that is suitable for estimation of focal anomalies compared to Tikhonov regularization. The second is a complex analysis based method in which multiple pointwise current anomalies can be identified directly and algebraically from the measured magnetic flux density.
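
    The L1-regularized imaging idea can be sketched as a generic sparse linear inverse problem: recover a focal anomaly vector from a small number of linear measurements. The sensitivity matrix below is a random stand-in for the physical (Biot-Savart-type) operator relating MEA current anomalies to the measured magnetic flux density, so the example is illustrative only.

      import numpy as np
      from sklearn.linear_model import Lasso, Ridge

      rng = np.random.default_rng(4)
      n_sensors, n_cells = 60, 200
      A = rng.normal(size=(n_sensors, n_cells))        # stand-in sensitivity matrix

      x_true = np.zeros(n_cells)
      x_true[[25, 140]] = [1.0, -0.7]                  # two localized anomalies
      b = A @ x_true + 0.01 * rng.normal(size=n_sensors)

      x_l1 = Lasso(alpha=0.02, max_iter=10000).fit(A, b).coef_   # sparse (focal) estimate
      x_l2 = Ridge(alpha=1.0).fit(A, b).coef_                    # smooth Tikhonov-type estimate
      print("coefficients above 1e-3 - L1:", int(np.sum(np.abs(x_l1) > 1e-3)),
            " L2:", int(np.sum(np.abs(x_l2) > 1e-3)))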

  19. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDFs and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.
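
    The mechanics of such a fit can be sketched generically: model the log-density as a low-order polynomial, normalize it numerically, and choose the coefficients that maximize the sample likelihood. This is only a stand-in for the idea; the paper's AR-like parameterization differs.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      sample = rng.normal(loc=1.0, scale=0.7, size=80)     # deliberately short sample
      lo, hi = sample.min() - 3, sample.max() + 3          # finite support for normalization

      def neg_log_likelihood(theta):
          logq = lambda x: np.polyval(theta, x)            # unnormalized log-density
          Z, _ = quad(lambda x: np.exp(np.clip(logq(x), -50, 50)), lo, hi)
          return -(np.sum(logq(sample)) - sample.size * np.log(Z))

      res = minimize(neg_log_likelihood, np.zeros(3), method='Nelder-Mead')
      print("fitted log-density coefficients (highest degree first):", np.round(res.x, 3))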

  20. Estimation of current density distribution of PAFC by analysis of cell exhaust gas

    SciTech Connect

    Kato, S.; Seya, A.; Asano, A.

    1996-12-31

    Estimating the distributions of current densities, voltages, gas concentrations, etc., in phosphoric acid fuel cell (PAFC) stacks is very important for producing fuel cells of higher quality. In this work, we have developed a numerical simulation tool to map out these distributions in a PAFC stack. In particular, to study the current density distribution in the reaction area of the cell, we analyzed the gas composition at several positions inside a gas outlet manifold of the PAFC stack. By comparing these measured data with calculated data, the current density distribution in a cell plane calculated by the simulation was verified.

  1. Estimation of Density-Dependent Mortality of Juvenile Bivalves in the Wadden Sea

    PubMed Central

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  2. Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.

    PubMed

    Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap

    2014-01-01

    We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293

  3. Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

    NASA Astrophysics Data System (ADS)

    Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

    2005-12-01

    Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or 'fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows us to identify a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m2 and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m2. Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results of naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another. If a rather strict CV criterion of 5% is used, a grid of 7 points/m2 is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development when

  4. Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini

    2014-03-01

    Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures that have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with regard to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between the fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has threefold advantages: (1) it avoids the system calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter induced overestimation or underestimation of breast density; (2) it obtains the system specific separate and differential attenuation values of fibroglandular and fat for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volume breast density measurement with the proposed method. The experimental results have shown that the method has significantly improved the accuracy of estimating breast density.

  5. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    PubMed

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km2 state space, density estimates were 0.33 and 0.40 individuals/100 km2 in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km2 corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. PMID:27136188

  6. Estimation of dispersion parameters from photographic density measurements on smoke puffs

    NASA Astrophysics Data System (ADS)

    Yassky, D.

    An extension is proposed of methods that use "optical boundaries" of smoke-plumes in order to estimate atmospheric dispersion parameters. Use is made here of some properties of photographic optics and concentration distributions of light absorbing puffs having no multiple scattering. An array of relative photometric densities, measured on a single photograph of a puff, is shown to be of use in numerical estimation of a puff's dispersive parameters. The proposed method's performance is evaluated by means of computer simulation which includes estimates of the influence of photogrammetric and photometric errors. Future experimental validation of the proposed method may introduce fast and inexpensive ways of obtaining extensive atmospheric dispersion data bases.

  7. Change-point detection in time-series data by relative density-ratio estimation.

    PubMed

    Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi

    2013-07-01

    The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502
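
    A minimal sketch of the ingredients, a RuLSIF-style direct estimator of the relative density ratio between two windows scored by the relative Pearson divergence, is given below. The kernel bandwidth, regularization, and window length are fixed here for brevity (the paper selects such values by cross-validation), so treat this as an illustration rather than the published algorithm.

      import numpy as np

      def rulsif_divergence(x_nu, x_de, alpha=0.1, sigma=1.0, lam=0.1):
          centres = x_nu                                        # Gaussian kernel centres
          K = lambda a, b: np.exp(-((a[:, None] - b[None, :])**2) / (2 * sigma**2))
          Phi_nu, Phi_de = K(x_nu, centres), K(x_de, centres)
          H = alpha * Phi_nu.T @ Phi_nu / len(x_nu) + (1 - alpha) * Phi_de.T @ Phi_de / len(x_de)
          h = Phi_nu.mean(axis=0)
          theta = np.linalg.solve(H + lam * np.eye(len(centres)), h)
          r_nu, r_de = Phi_nu @ theta, Phi_de @ theta           # ratio evaluated at the samples
          return (-alpha * np.mean(r_nu**2) / 2 - (1 - alpha) * np.mean(r_de**2) / 2
                  + np.mean(r_nu) - 0.5)                        # relative Pearson divergence

      rng = np.random.default_rng(6)
      series = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])  # mean shift at t = 200
      window = 50
      scores = [rulsif_divergence(series[t - window:t], series[t:t + window])
                for t in range(window, len(series) - window)]
      print("estimated change point:", window + int(np.argmax(scores)))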

  8. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PMID:24885339
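
    A small sketch of the comparison: Silverman's rule of thumb versus a cross-validated bandwidth, here chosen by likelihood cross-validation with scikit-learn as a simple stand-in for the selectors examined in the article. The bimodal sample is one of the shapes for which rule-of-thumb bandwidths tend to oversmooth.

      import numpy as np
      from sklearn.neighbors import KernelDensity
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(7)
      x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(2, 1.0, 150)])[:, None]

      n, sd = len(x), x.std(ddof=1)
      iqr = np.subtract(*np.percentile(x, [75, 25]))
      h_silverman = 0.9 * min(sd, iqr / 1.349) * n ** (-0.2)   # Silverman's rule of thumb

      grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                          {'bandwidth': np.linspace(0.05, 1.5, 30)}, cv=5)
      h_cv = grid.fit(x).best_params_['bandwidth']
      print(f"Silverman h = {h_silverman:.3f}, cross-validated h = {h_cv:.3f}")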

  9. A hybrid approach to crowd density estimation using statistical learning and texture classification

    NASA Astrophysics Data System (ADS)

    Li, Yin; Zhou, Bowen

    2013-12-01

    Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in most occasions such as the waiting hall in the airport or the lobby in the railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, which comprises a training phase and a test phase. The crowd images are gridded into small blocks which denote foreground or background. Then HOG features are extracted and are fed into a binary SVM for each block. Hence, crowd blobs can be obtained by the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on some real scenarios where the crowd motion is not so obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and work well for different levels of crowding.
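
    The block-classification stage described above can be sketched with HOG features and a binary SVM. The blocks and labels below are synthetic placeholders (textured versus flat patches standing in for crowd versus background), so this only illustrates the plumbing, not the trained system of the paper.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)
      def make_block(textured):
          return np.clip(rng.normal(0.5, 0.25 if textured else 0.02, size=(32, 32)), 0, 1)

      blocks = [make_block(textured=i % 2 == 0) for i in range(200)]
      labels = np.array([i % 2 == 0 for i in range(200)], dtype=int)   # 1 = "crowd" block

      features = np.array([hog(b, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)) for b in blocks])
      clf = SVC(kernel='rbf').fit(features[:150], labels[:150])
      print("held-out block accuracy:", clf.score(features[150:], labels[150:]))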

  10. Estimation of nighttime dip-equatorial E-region current density using measurements and models

    NASA Astrophysics Data System (ADS)

    Pandey, Kuldeep; Sekar, R.; Anandarao, B. G.; Gupta, S. P.; Chakrabarty, D.

    2016-08-01

    The existence of the possible ionospheric current during nighttime over low-equatorial latitudes is one of the unresolved issues in ionospheric physics and geomagnetism. A detailed investigation is carried out to estimate the same over Indian longitudes using in situ measurements from Thumba (8.5 ° N, 76.9 ° E), empirical plasma drift model (Fejer et al., 2008) and equatorial electrojet model developed by Anandarao (1976). This investigation reveals that the nighttime E-region current densities vary from ∼0.3 to ∼0.7 A/km2 during pre-midnight to early morning hours on geomagnetically quiet conditions. The nighttime current densities over the dip equator are estimated using three different methods (discussed in methodology section) and are found to be consistent with one another within the uncertainty limits. Altitude structures in the E-region current densities are also noticed which are shown to be associated with altitudinal structures in the electron densities. The horizontal component of the magnetic field induced by these nighttime ionospheric currents is estimated to vary between ∼2 and ∼6 nT during geomagnetically quiet periods. This investigation confirms the existence of nighttime ionospheric current and opens up a possibility of estimating base line value for geomagnetic field fluctuations as observed by ground-based magnetometer.

  11. ESTIMATION OF SOYBEAN ROOT LENGTH DENSITY DISTRIBUTION WITH DIRECT AND SENSOR BASED MEASUREMENTS OF CLAYPAN MORPHOLOGY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...

  12. Estimation of size and number density of microbubbles based on analysis of frequency-dependent attenuation

    NASA Astrophysics Data System (ADS)

    Yoshida, Kenji; Tamura, Kazuki; Yamaguchi, Tadashi

    2016-07-01

    A method of estimating the size and number density of microbubbles in suspension is proposed, which matches the theoretically calculated frequency dependent attenuation coefficient with the experimental data. Assuming that the size distribution of bubbles is given by a log-normal function, three parameters (expected value and standard deviation of radius and the number density of bubbles) of Sonazoid® in the steady flow were estimated. Bubbles are exposed to ultrasound with a center frequency of 5 MHz and mechanical indices of 0.4, 0.5, 0.7, and 1.1. The expected value and standard deviation for the size distribution were estimated to be 70–85 and 45–60% of the reference values in the case of a lower mechanical index, respectively. The number density was estimated to be 20–30 times smaller than the reference values. This fundamental examination indicates that the number density of bubbles can be qualitatively evaluated by the proposed method.

  13. USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...

  14. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
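
    The distribution itself is readily evaluated with SciPy; a sketch of the density and a two-sided confidence interval is shown below. The degrees of freedom (twice the number of averaged segments) and the noncentrality standing in for the tonal helicopter component are illustrative values, not those of the report.

      import numpy as np
      from scipy.stats import ncx2

      K = 16                    # number of averaged periodogram segments (illustrative)
      df = 2 * K                # degrees of freedom
      nc = 40.0                 # noncentrality from the deterministic (tonal) component

      x = np.linspace(ncx2.ppf(0.001, df, nc), ncx2.ppf(0.999, df, nc), 400)
      pdf = ncx2.pdf(x, df, nc)                        # density of the scaled spectral estimate
      lower, upper = ncx2.ppf([0.025, 0.975], df, nc)
      print(f"95% limits for the scaled estimate: [{lower:.1f}, {upper:.1f}]")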

  15. Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Lovelace, Geoffrey

    2001-12-01

    Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρX(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z)=ρX(z)/ρX(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ωm. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0 ≤ z ≤ 2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ωm can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ωm can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ωm by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z > 1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.

  16. Bioenergetics estimate of the effects of stocking density on hatchery production of smallmouth bass fingerlings

    USGS Publications Warehouse

    Robel, G.L.; Fisher, W.L.

    1999-01-01

    Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking densities. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.

  17. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  18. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.

  19. The effect of density estimation on the conservativeness in Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Suresh, Pranav; Kumar, S. S. Prasanna; Patnaik, B. S. V.

    2015-11-01

    Smoothed Particle Hydrodynamics (SPH) is a popular mesh-free method for solving a wide range of problems that involve interfaces. In SPH, the Lagrangian nature of the method enables mass conservation to be naturally satisfied. However, satisfying the conservation of momentum and energy is formulation dependent. One major aspect of ensuring conservativeness comes from the density estimation. There are two distinct types of density estimation approaches, namely the continuity density approach and the summation density approach. Both approaches are popular with the single- and multi-phase flow communities. In the present study, we assess the role of density evaluation on the conservativeness, using several representative numerical examples. In particular, we have simulated the Rayleigh-Taylor instability problem, the Non-Boussinesq lock exchange problem, bubble rise in a water column, etc. Although for shorter simulation time scales both methods have similar conservative properties, we observe that for longer time scales the summation-density approach is better. For free surface detection and normal vector computations, efficient computational procedures have been devised.
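
    For readers unfamiliar with the two density routes being compared, the sketch below contrasts them in one dimension with a cubic-spline kernel; the particle spacing, masses, smoothing length and toy velocity field are assumptions for illustration only.

```python
import numpy as np

def cubic_spline_w(q, h):
    """Standard 1-D cubic-spline SPH kernel (normalization 2/(3h))."""
    sigma = 2.0 / (3.0 * h)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                            np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def summation_density(x, m, h):
    """Summation approach: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * cubic_spline_w(q, h)).sum(axis=1)

def continuity_density_rate(x, v, m, h):
    """Continuity approach: d(rho_i)/dt = sum_j m_j (v_i - v_j) dW/dx_i, then time-integrated."""
    dx = x[:, None] - x[None, :]
    q = np.abs(dx) / h
    dwdq = (2.0 / (3.0 * h)) * np.where(q < 1.0, -3.0 * q + 2.25 * q ** 2,
                                        np.where(q < 2.0, -0.75 * (2.0 - q) ** 2, 0.0))
    dwdx = dwdq * np.sign(dx) / h
    return (m[None, :] * (v[:, None] - v[None, :]) * dwdx).sum(axis=1)

x = np.linspace(0.0, 1.0, 101)          # evenly spaced 1-D particles
m = np.full(x.size, 1.0 / x.size)       # unit total mass
h = 2.0 * (x[1] - x[0])                 # smoothing length (assumed)
v = np.sin(2.0 * np.pi * x)             # toy compressive velocity field
print("summation density at the centre:", round(float(summation_density(x, m, h)[50]), 3))
print("continuity d(rho)/dt at centre :", round(float(continuity_density_rate(x, v, m, h)[50]), 3))
```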

  20. Density estimation of small-mammal populations using a trapping web and distance sampling methods

    USGS Publications Warehouse

    Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.

    1983-01-01

    Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M_t+1 · f(0), where M_t+1 is the number of individuals captured and f(0) is computed from the M_t+1 distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.

  1. Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2013-01-01

    Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, “for processing”) or vendor postprocessed (ie, “for presentation”) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman’s risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. PMID:23465381

  2. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
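
    As a rough illustration of the Spencer-Danner route mentioned above, the sketch below evaluates the modified Rackett equation for a saturated liquid; the critical constants and Rackett compressibility used are placeholder values, not the fitted jatropha parameters from the paper.

```python
R = 8.314462618  # universal gas constant, J mol-1 K-1

def rackett_density(T, Tc, Pc, Z_RA, M):
    """Saturated-liquid density [kg m-3] from the modified Rackett equation:
    V = (R*Tc/Pc) * Z_RA**(1 + (1 - T/Tc)**(2/7)),  rho = M / V."""
    V = (R * Tc / Pc) * Z_RA ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))  # m3 mol-1
    return M / V

# Placeholder inputs (roughly methyl-oleate-like): Tc [K], Pc [Pa], Z_RA, M [kg/mol]
for T in (303.15, 333.15, 363.15):
    rho = rackett_density(T, Tc=764.0, Pc=1.28e6, Z_RA=0.235, M=0.2965)
    print(f"T = {T:.2f} K  ->  rho = {rho:.1f} kg/m3")
```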

  3. Multiscale seismic characterization of marine sediments by using a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique

    2015-04-01

    We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. This method is developed in the framework of the wavelet response (WR) where dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as multiscale seismic attributes, in particular ridge functions, which contain most of the information that quantifies the complex geometry of the reflector. We extend this approach by considering its application to analyse seismic data acquired with broadband but frequency-limited source signals. The band-pass filter related to such actual sources distorts the WR: in order to remove these effects, we develop an original processing based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface and is defined for a reduced range of dilations, controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source-correction with the high and very high resolution seismic sources of the SYSIF deep-towed seismic device and we show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR. We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare

  4. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    NASA Astrophysics Data System (ADS)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. The neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn during 3 Martian years, which fills the gap of the previous observations for the upper atmosphere of Mars. The simulations of the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data sets (EDS1). The corrected simulations with the same correction parameters as for EDS1 match the derived neutral densities from two other MGS/RS data sets (EDS2 and EDS3) very well. The derived neutral density from EDS3 shows a dust storm effect, which is in accord with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral density derived from the MGS/RS measurements can be used to validate the Martian atmospheric models. The method presented in this study can be applied to other radio occultation measurements, such as the result of the Radio Science experiment on board MEX.

  5. Estimation of energy density of Li-S batteries with liquid and solid electrolytes

    NASA Astrophysics Data System (ADS)

    Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.

    2016-09-01

    With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of the state-of-the-art Li-ion batteries (LIBs) cannot satisfy the practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage (1675 mAh g-1), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters such as the active material loading, the amount/thickness of electrolyte, the sulfur utilization, etc. to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. It demonstrates that the Li-S battery is most likely to be competitive with LIBs in gravimetric energy density, but not in volumetric energy density, with current technology. Furthermore, the cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially the cells with the polymer electrolyte. This estimation study of Li-S energy density can be used as good guidance for controlling the key design parameters in order to achieve the desired energy density at the cell level.
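
    The kind of cell-level arithmetic described above can be sketched as follows; all design inputs (sulfur loading, utilization, average discharge voltage and inactive masses) are assumed, illustrative numbers rather than the paper's design parameters.

```python
def lis_gravimetric_energy(sulfur_mass_g, utilization, avg_voltage_V, inactive_masses_g):
    """Cell-level Wh/kg; 1.675 Ah/g is the theoretical specific capacity of sulfur."""
    capacity_Ah = 1.675 * sulfur_mass_g * utilization
    energy_Wh = capacity_Ah * avg_voltage_V
    total_mass_kg = (sulfur_mass_g + sum(inactive_masses_g.values())) / 1000.0
    return energy_Wh / total_mass_kg

# assumed inactive component masses in grams for a small pouch-type cell
inactive = {"lithium": 0.6, "electrolyte": 6.0, "carbon_binder": 0.5,
            "current_collectors": 0.7, "separator_packaging": 0.7}
print(round(lis_gravimetric_energy(2.0, 0.75, 2.1, inactive), 0), "Wh/kg (illustrative)")
```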

  6. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
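
    The core idea of a kernel-density tally, scoring each event at many nearby tally points with a kernel weight, can be sketched in one dimension as below; the toy uniform-medium transport and the bandwidth are assumptions, and the mean-free-path kernel of the paper is replaced here by a fixed-width Epanechnikov kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_t, n_hist = 1.0, 20000                       # total cross-section [1/cm], histories
sites = rng.exponential(1.0 / sigma_t, n_hist)     # first-collision sites in a thick slab
sites = sites[sites < 5.0]                         # keep collisions inside the 5 cm tally region

x_tally = np.linspace(0.0, 5.0, 101)               # tally points (resolution-independent score)
h = 0.15                                           # kernel bandwidth

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

# KDE tally: every collision contributes to all nearby tally points
kde = epanechnikov((x_tally[:, None] - sites[None, :]) / h).sum(axis=1) / (h * n_hist)

# conventional histogram tally at a comparable resolution
hist, edges = np.histogram(sites, bins=50, range=(0.0, 5.0))
hist = hist / (np.diff(edges) * n_hist)

print("analytic density at x=1 :", round(sigma_t * np.exp(-sigma_t * 1.0), 3))
print("KDE tally at x=1        :", round(float(kde[20]), 3))
print("histogram tally near x=1:", round(float(hist[10]), 3))
```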

  7. Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef

    PubMed Central

    Mellin, Camille; Pratchett, Morgan S.; Hoey, Jessica; Anthony, Kenneth R.N.; Cheal, Alistair J.; Miller, Ian; Sweatman, Hugh; Cowan, Zara L.; Taylor, Sascha; Moon, Steven; Fonnesbeck, Chris J.

    2016-01-01

    Crown-of-thorns starfish (CoTS; Acanthaster spp.) are outbreaking pests on many Indo-Pacific coral reefs that cause substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.

  8. Density estimation in a wolverine population using spatial capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.

    2011-01-01

    Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.

  9. A pseudo wavelet-based method for accurate tagline tracing on tagged MR images of the tongue

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaohui; Ozturk, Cengizhan; Chi-Fishman, Gloria

    2006-03-01

    In this paper, we present a pseudo wavelet-based tagline detection method. The tagged MR image is transformed to the wavelet domain, and the prominent tagline coefficients are retained while others are eliminated. Significant stripes, which are mixtures of tags and line-like anatomical boundaries, are extracted via segmentation. A refinement step follows such that broken lines or isolated points are grouped or eliminated. Without assumptions on tag models, our method extracts taglines automatically regardless of their width and spacing. In addition, founded on the multi-resolution wavelet analysis, our method reconstructs taglines precisely and shows great robustness to various types of taglines.
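
    A minimal sketch of the "retain the prominent coefficients, eliminate the rest" step is given below using PyWavelets on a synthetic tagged image; the wavelet family, decomposition level and percentile rule are generic choices, not the authors' pseudo-wavelet construction.

```python
import numpy as np
import pywt

# synthetic tagged image: smooth "anatomy" plus tag stripes and noise
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
image = 0.4 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
image += 0.3 * (np.sin(2 * np.pi * 16 * x) > 0.8)          # stripe pattern
image += 0.05 * np.random.default_rng(0).normal(size=image.shape)

coeffs = pywt.wavedec2(image, "db2", level=3)
kept = [np.zeros_like(coeffs[0])]                           # drop the smooth approximation
for details in coeffs[1:]:
    # keep only the most prominent 5% of detail coefficients in each subband
    kept.append(tuple(np.where(np.abs(d) >= np.percentile(np.abs(d), 95), d, 0.0)
                      for d in details))
stripe_map = pywt.waverec2(kept, "db2")[: image.shape[0], : image.shape[1]]
print("stripe-map energy fraction:",
      round(float((stripe_map ** 2).sum() / (image ** 2).sum()), 3))
```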

  10. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  11. Estimation of the density of the clay-organic complex in soil

    NASA Astrophysics Data System (ADS)

    Czyż, Ewa A.; Dexter, Anthony R.

    2016-01-01

    Soil bulk density was investigated as a function of soil contents of clay and organic matter in arable agricultural soils at a range of locations. The contents of clay and organic matter were used in an algorithmic procedure to calculate the amounts of clay-organic complex in the soils. Values of soil bulk density as a function of soil organic matter content were used to estimate the amount of pore space occupied by unit amount of complex. These estimations show that the effective density of the clay-organic matter complex is very low with a mean value of 0.17 ± 0.04 g ml-1 in arable soils. This value is much smaller than the soil bulk density and smaller than any of the other components of the soil considered separately (with the exception of the gas content). This low value suggests that the clay-soil complex has an extremely porous and open structure. When the complex is considered as a separate phase in soil, it can account for the observed reduction of bulk density with increasing content of organic matter.

  12. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1 sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.

  14. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification

  15. Scatterer number density considerations in reference phantom-based attenuation estimation.

    PubMed

    Rubert, Nicholas; Varghese, Tomy

    2014-07-01

    Attenuation estimation and imaging have the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue-mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and a signal-to-noise ratio (SNR) approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1647 regions of interest in five ex vivo bovine livers, we obtained an envelope SNR of 1.10 ± 0.12 when the tissue was imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article, we examine attenuation estimation in numerical phantoms, tissue-mimicking phantoms with variable SNDs and ex vivo bovine liver before and after thermal coagulation. We find that reference phantom-based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SNDs, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find that the standard deviation of attenuation slope estimates increases from 0.07 to 0.25 dB/cm-MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in tissue-mimicking phantoms with a large estimation kernel size (16 mm axially × 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (<0.01 dB/cm-MHz). We also compare results obtained with reference phantom-based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800

  16. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone

  17. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
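
    A simple form of nearest-neighbour density ratio estimation can be sketched as below; this follows the generic k-NN ratio idea (not necessarily the exact estimator or the cross-validated neighbourhood-size selection used in the paper), with synthetic training and target samples.

```python
import numpy as np

def knn_density_ratio(X_tr, X_te, k=16):
    """w(x_i) ~ (m_i / n_te) / (k / n_tr): m_i counts target points inside the
    ball reaching the k-th nearest training neighbour of x_i (self excluded)."""
    d_tr = np.linalg.norm(X_tr[:, None, :] - X_tr[None, :, :], axis=-1)
    r = np.sort(d_tr, axis=1)[:, k]                  # column 0 is the zero self-distance
    d_te = np.linalg.norm(X_tr[:, None, :] - X_te[None, :, :], axis=-1)
    m = (d_te <= r[:, None]).sum(axis=1)
    return (m / X_te.shape[0]) / (k / X_tr.shape[0])

rng = np.random.default_rng(3)
X_train = rng.normal(0.0, 1.0, size=(500, 2))        # biased "spectroscopic" training sample
X_target = rng.normal(0.5, 1.2, size=(2000, 2))      # shifted "photometric" target sample
w = knn_density_ratio(X_train, X_target, k=16)
print("weight range:", round(float(w.min()), 2), "-", round(float(w.max()), 2),
      " mean:", round(float(w.mean()), 2))
```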

  18. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    USGS Publications Warehouse

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

  19. Wavelet-based Time Series Bootstrap Approach for Multidecadal Hydrologic Projections Using Observed and Paleo Data of Climate Indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, S. T.

    2013-12-01

    Understanding streamflow variability and the ability to generate realistic scenarios at multi-decadal time scales is important for robust water resources planning and management in any river basin - more so in the Colorado River Basin with its semi-arid climate and highly stressed water resources. It is increasingly evident that large-scale climate forcings such as El Nino Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO) and Atlantic Multi-decadal Oscillation (AMO) are known to modulate the Colorado River Basin hydrology at multi-decadal time scales. Thus, modeling these large-scale climate indicators is an important step towards conditionally modeling the multi-decadal streamflow variability. To this end, we developed a simulation model that combines the wavelet-based time series method, Wavelet Auto Regressive Moving Average (WARMA), with a K-nearest neighbor (K-NN) bootstrap approach. In this, for a given time series (climate forcings), dominant periodicities/frequency bands are identified from the wavelet spectrum that pass the 90% significance test. The time series is filtered at these frequencies in each band to create 'components'; the components are orthogonal and, when added to the residual (i.e., noise), recover the original time series. The components, being smooth, are easily modeled using parsimonious Auto Regressive Moving Average (ARMA) time series models. The fitted ARMA models are used to simulate the individual components, which are added to obtain a simulation of the original series. The WARMA approach is applied to all the climate forcing indicators, which are used to simulate multi-decadal sequences of these forcings. For the current year, the simulated forcings are considered the 'feature vector' and its K nearest neighbors are identified; one of the neighbors (i.e., one of the historical years) is resampled using a weighted probability metric (with more weight given to the nearest neighbor and least to the farthest) and the corresponding streamflow is the simulated flow for that year.
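
    The weighted K-NN resampling step can be sketched as follows, using the commonly used 1/j weight kernel; the climate-index and streamflow series are synthetic stand-ins, and the WARMA simulation of the indices themselves is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, K = 80, 9
climate_index = np.cumsum(rng.normal(size=n_years))                    # stand-in climate forcing
flow = 15000 + 3000 * climate_index + rng.normal(0, 2000, n_years)     # paired historical flows

weights = 1.0 / np.arange(1, K + 1)
weights /= weights.sum()                           # w_j = (1/j) / sum_i (1/i), nearest favoured

def knn_bootstrap(sim_index_value, n_draws=1):
    """Resample historical flows from the K nearest historical climate states."""
    order = np.argsort(np.abs(climate_index - sim_index_value))[:K]    # nearest first
    picks = rng.choice(order, size=n_draws, p=weights)
    return flow[picks]

print(knn_bootstrap(sim_index_value=1.2, n_draws=5).round(0))
```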

  20. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2016-01-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys this property is omitted due to both laboratory and labour costs and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international data sets across many studies, with their own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict different horizon type bulk densities using local known bulk density data sets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This then allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known data sets had the best predictions, using the recalibrated and validated pedotransfer functions.

  1. A method for estimating the height of a mesospheric density level using meteor radar

    NASA Astrophysics Data System (ADS)

    Younger, J. P.; Reid, I. M.; Vincent, R. A.; Murphy, D. J.

    2015-07-01

    A new technique for determining the height of a constant density surface at altitudes of 78-85 km is presented. The first results are derived from a decade of observations by a meteor radar located at Davis Station in Antarctica and are compared with observations from the Microwave Limb Sounder instrument aboard the Aura satellite. The density of the neutral atmosphere in the mesosphere/lower thermosphere region around 70-110 km is an essential parameter for interpreting airglow-derived atmospheric temperatures, planning atmospheric entry maneuvers of returning spacecraft, and understanding the response of climate to different stimuli. This region is not well characterized, however, due to inaccessibility combined with a lack of consistent strong atmospheric radar scattering mechanisms. Recent advances in the analysis of detection records from high-performance meteor radars provide new opportunities to obtain atmospheric density estimates at high time resolutions in the MLT region using the durations and heights of faint radar echoes from meteor trails. Previous studies have indicated that the expected increase in underdense meteor radar echo decay times with decreasing altitude is reversed in the lower part of the meteor ablation region due to the neutralization of meteor plasma. The height at which the gradient of meteor echo decay times reverses is found to occur at a fixed atmospheric density. Thus, the gradient reversal height of meteor radar diffusion coefficient profiles can be used to infer the height of a constant density level, enabling the observation of mesospheric density variations using meteor radar.

  2. Estimation of high-resolution dust column density maps. Empirical model fits

    NASA Astrophysics Data System (ADS)

    Juvela, M.; Montillaud, J.

    2013-09-01

    Context. Sub-millimetre dust emission is an important tracer of column density N of dense interstellar clouds. One has to combine surface brightness information at different spatial resolutions, and specific methods are needed to derive N at a resolution higher than the lowest resolution of the observations. Some methods have been discussed in the literature, including a method (in the following, method B) that constructs the N estimate in stages, where the smallest spatial scales are derived using only the shortest-wavelength maps. Aims: We propose simple model fitting as a flexible way to estimate high-resolution column density maps. Our goal is to evaluate the accuracy of this procedure and to determine whether it is a viable alternative for making these maps. Methods: The new method consists of fitting model maps of column density (or intensity at a reference wavelength) and colour temperature. The model is fitted using Markov chain Monte Carlo methods, comparing model predictions with observations at their native resolution. We analyse simulated surface brightness maps and compare the accuracy of the new method with method B and with the results that would be obtained using high-resolution observations without noise. Results: The new method is able to produce reliable column density estimates at a resolution significantly higher than the lowest resolution of the input maps. Compared to method B, it is relatively resilient against the effects of noise. The method is computationally more demanding, but is feasible even in the analysis of large Herschel maps. Conclusions: The proposed empirical modelling method E is demonstrated to be a good alternative for calculating high-resolution column density maps, even with considerable super-resolution. Both methods E and B include the potential for further improvements, e.g., in the form of better a priori constraints.

  3. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    SciTech Connect

    Stewart, Robert N; White, Devin A; Urban, Marie L; Morton, April M; Webster, Clayton G; Stoyanov, Miroslav K; Bright, Eddie A; Bhaduri, Budhendra L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  4. Stochastic estimation of level density in nuclear shell-model calculations

    NASA Astrophysics Data System (ADS)

    Shimizu, Noritaka; Utsuno, Yutaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu

    2016-06-01

    A method for stochastically estimating the nuclear level density based on nuclear shell-model calculations is introduced. In order to count the number of eigenvalues of the shell-model Hamiltonian matrix, we perform a contour integral of the matrix elements of the resolvent. The shifted block Krylov subspace method enables its efficient computation. Utilizing this method, the contamination of center-of-mass motion is clearly removed.
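
    The counting idea, that the number of eigenvalues inside a closed contour equals the contour integral of the resolvent trace divided by 2πi, can be sketched for a small dense matrix as below; a random symmetric matrix stands in for the shell-model Hamiltonian, and plain dense linear solves replace the shifted block Krylov machinery needed at realistic dimensions.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)                     # random symmetric stand-in "Hamiltonian"

def count_eigenvalues(H, center, radius, n_quad=128):
    """Approximate (1/(2*pi*i)) * oint tr[(z*I - H)^-1] dz on a circular contour."""
    theta = 2 * np.pi * (np.arange(n_quad) + 0.5) / n_quad
    z = center + radius * np.exp(1j * theta)        # quadrature nodes on the circle
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n_quad)
    I = np.eye(H.shape[0], dtype=complex)
    total = sum(np.trace(np.linalg.solve(zk * I - H, I)) * dzk for zk, dzk in zip(z, dz))
    return (total / (2j * np.pi)).real

estimate = count_eigenvalues(H, center=0.5, radius=0.4)
exact = int(np.sum(np.abs(np.linalg.eigvalsh(H) - 0.5) < 0.4))
print("contour-integral count:", round(estimate, 2), "  exact count:", exact)
```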

  5. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

    Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  6. New Density Estimation Methods for Charged Particle Beams With Applications to Microbunching Instability

    SciTech Connect

    Balsa Terzic, Gabriele Bassi

    2011-07-01

    In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The particle distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
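
    A one-dimensional sketch of a thresholded-wavelet (TWT-style) density estimate is given below: bin the sampled particles, wavelet-transform the histogram, zero the small coefficients and invert. The wavelet family, decomposition level and universal threshold are generic denoising choices rather than the tuned parameters of the CSR code discussed above.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
samples = rng.normal(0.0, 1.0, 50000)                     # macro-particle coordinates
hist, edges = np.histogram(samples, bins=512, range=(-5, 5), density=True)

coeffs = pywt.wavedec(hist, "sym8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale from the finest level
thr = sigma * np.sqrt(2.0 * np.log(hist.size))            # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
density = pywt.waverec(denoised, "sym8")[: hist.size]

centers = 0.5 * (edges[:-1] + edges[1:])
truth = np.exp(-centers ** 2 / 2) / np.sqrt(2 * np.pi)    # known density of the toy sample
print("rms error, raw histogram:", round(float(np.sqrt(np.mean((hist - truth) ** 2))), 4))
print("rms error, TWT estimate :", round(float(np.sqrt(np.mean((density - truth) ** 2))), 4))
```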

  7. Use of prediction methods to estimate true density of active pharmaceutical ingredients.

    PubMed

    Cao, Xiaoping; Leyva, Norma; Anderson, Stephen R; Hancock, Bruno C

    2008-05-01

    True density is a fundamental and important property of active pharmaceutical ingredients (APIs). Using prediction methods to estimate the API true density can be very beneficial in pharmaceutical research and development, especially when experimental measurements cannot be made due to lack of material or sample handling restrictions. In this paper, two empirical prediction methods developed by Girolami and Immirzi and Perini were used to estimate the true density of APIs, and the estimation results were compared with experimentally measured values by helium pycnometry. The Girolami method is simple and can be used for both liquids and solids. For the tested APIs, the Girolami method had a maximum error of -12.7% and an average percent error of -3.0% with a 95% CI of (-3.8, -2.3%). The Immirzi and Perini method is more involved and is mainly used for solid crystals. In general, it gives better predictions than the Girolami method. For the tested APIs, the Immirzi and Perini method had a maximum error of 9.6% and an average percent error of 0.9% with a 95% CI of (0.3, 1.6%). PMID:18242023

  8. Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization.

    PubMed

    Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi

    2015-01-01

    Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340

  9. Examining the impact of the precision of address geocoding on estimated density of crime locations

    NASA Astrophysics Data System (ADS)

    Harada, Yutaka; Shimada, Takahito

    2006-10-01

    This study examines the impact of the precision of address geocoding on the estimated density of crime locations in a large urban area of Japan. The data consist of two separate sets of the same Penal Code offenses known to the police that occurred during a nine-month period of April 1, 2001 through December 31, 2001 in the central 23 wards of Tokyo. These two data sets are derived from the older and newer recording systems of the Tokyo Metropolitan Police Department (TMPD), which revised its crime reporting system in that year so that more precise location information than the previous years could be recorded. Each of these data sets was address-geocoded onto a large-scale digital map using our hierarchical address-geocoding schema, and we examined how such differences in the precision of address information, and the resulting differences in address-geocoded incident locations, affect the patterns in kernel density maps. An analysis using 11,096 pairs of incidents of residential burglary (each pair consists of the same incidents geocoded using older and newer address information, respectively) indicates that the kernel density estimation with a cell size of 25×25 m and a bandwidth of 500 m may work quite well in absorbing the poorer precision of geocoded locations based on data from the older recording system, whereas in several areas where the older recording system resulted in a very poor precision level, the inaccuracy of incident locations may produce artifactual and potentially misleading patterns in kernel density maps.

  10. Fracture density estimation from petrophysical log data using the adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Ja'fari, Ahmad; Kadkhodaie-Ilkhchi, Ali; Sharghi, Yoosef; Ghanavati, Kiarash

    2012-02-01

    Fractures, as some of the most common and important geological features, play a significant role in reservoir fluid flow. Therefore, fracture detection is one of the important steps in fractured reservoir characterization. Different tools and methods have been introduced for fracture detection, among which formation image logs are considered the most common and effective. Due to economic considerations, image logs are available for a limited number of wells in a hydrocarbon field. In this paper, we suggest a model to estimate fracture density from the conventional well logs using an adaptive neuro-fuzzy inference system. Image logs from two wells of the Asmari formation in one of the SW Iranian oil fields are used to verify the results of the model. Statistical data analysis indicates good correlation between fracture density and well log data including sonic, deep resistivity, neutron porosity and bulk density. The results of this study show that there is good agreement (correlation coefficient of 0.98) between the measured and neuro-fuzzy estimated fracture density.

  11. Estimation of Graphite Density and mechanical Strength of VHTR during Air-Ingress Accident

    SciTech Connect

    Chang Oh; Eung Soo Kim; Hee Cheon No; Byung Jun Kim

    2007-09-01

    An air-ingress accident in a VHTR is anticipated to cause severe changes in graphite density and mechanical strength through the oxidation process, resulting in many side effects. However, the quantitative estimation has not been performed yet. In this study, the focus has been on the prediction of graphite density change and mechanical strength using a thermal hydraulic system analysis code. For analysis of the graphite density change, a simple graphite burn-off model was developed based on the similarity between a parallel electrical circuit and graphite oxidation, considering the overall changes of the graphite geometry and density. The developed model was implemented in the VHTR system analysis code, GAMMA, along with other comprehensive graphite oxidation models. As a reference reactor, the 600 MWt GT-MHR was selected. From the calculation, it was observed that the main oxidation process began 5.5 days after the accident, following the onset of natural convection. The maximum core temperature reached up to 1400 °C; however, it never exceeded the maximum temperature criterion of 1600 °C. According to the calculation results, most of the oxidation occurs in the bottom reflector, so the exothermic heat generated by oxidation did not affect the core heat-up. However, the oxidation process greatly decreased the density of the bottom reflector, making it vulnerable to mechanical stress. In fact, since the bottom reflector supports the reactor core, the stress is highly concentrated on this part. The calculations were made for up to 11 days after the accident; a 4.5% density decrease was estimated, resulting in a 25% reduction in mechanical strength.

  12. Neutral density estimation derived from meteoroid measurements using high-power, large-aperture radar

    NASA Astrophysics Data System (ADS)

    Li, A.; Close, S.

    2016-07-01

    We present a new method to estimate the neutral density of the lower thermosphere/upper mesosphere given deceleration measurements from meteoroids as they enter Earth's atmosphere. By tracking the plasma (referred to as head echoes) surrounding the ablating meteoroid, we are able to measure the range and velocity of the meteoroid in 3-D. This is accomplished at Advanced Research Projects Agency Long-Range Tracking and Instrumentation Radar (ALTAIR) with the use of four additional receiving horns. Combined with the momentum and ablation equations, we can feed large quantities of data into a minimization function which estimates the associated constants related to the ablation process and, more importantly, the density ratios between successive layers of the atmosphere. Furthermore, if we take statistics of the masses and bulk densities of the meteoroids, we can calculate the neutral densities and their associated errors by the ratio distribution on the minimum error statistic. A standard deviation of approximately 10% can be achieved, neglecting measurement error from the radar. Errors in velocity and deceleration compound this uncertainty, which in the best case amounts to an additional 4% error. The accuracy can be further improved if we take increasing amounts of measurements, limited only by the quality of the ranging measurements and the probability of knowing the median of the distribution. Data analyzed consist mainly of approximately 500 meteoroids over a span of 20 min on two separate days. The results are compared to the existing atmospheric model NRLMSISE-00, which predicts lower density ratios and static neutral densities at these altitudes.

  13. Wavelet-based approaches for multiple hypothesis testing in activation mapping of functional magnetic resonance images of the human brain

    NASA Astrophysics Data System (ADS)

    Fadili, Jalal M.; Bullmore, Edward T.

    2003-11-01

    Wavelet-based methods for multiple hypothesis testing are described and their potential for activation mapping of human functional magnetic resonance imaging (fMRI) data is investigated. In this approach, we emphasize convergence between methods of wavelet thresholding or shrinkage and the problem of multiple hypothesis testing in both classical and Bayesian contexts. Specifically, our interest will be focused on ensuring a trade-off between type I probability error control and power dissipation. We describe a technique for controlling the false discovery rate at an arbitrary level of type I error in testing multiple wavelet coefficients generated by a 2D discrete wavelet transform (DWT) of spatial maps of fMRI time series statistics. We also describe and apply recursive testing methods that can be used to define a threshold unique to each level and orientation of the 2D-DWT. Bayesian methods, incorporating a formal model for the anticipated sparseness of wavelet coefficients representing the signal or true image, are also tractable. These methods are comparatively evaluated by analysis of "null" images (acquired with the subject at rest), in which case the number of positive tests should be exactly as predicted under the null hypothesis, and an experimental dataset acquired from 5 normal volunteers during an event-related finger movement task. We show that all three wavelet-based methods of multiple hypothesis testing have good type I error control (the FDR method being most conservative) and generate plausible brain activation maps.
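    The false-discovery-rate step can be illustrated with a minimal Benjamini-Hochberg sketch over 2D DWT coefficients: p-values are computed from normalized detail coefficients under a Gaussian null, sorted, and cut at the chosen FDR level. PyWavelets, the Haar wavelet, and the simple Gaussian null are assumptions made for brevity, not the authors' exact pipeline.

        import numpy as np
        import pywt
        from scipy.stats import norm

        def fdr_threshold_map(stat_map, wavelet="haar", level=3, q=0.05):
            """Hard-threshold 2D DWT detail coefficients by Benjamini-Hochberg FDR."""
            coeffs = pywt.wavedec2(stat_map, wavelet, level=level)
            approx, details = coeffs[0], coeffs[1:]
            flat = np.concatenate([d.ravel() for lvl in details for d in lvl])
            sigma = np.median(np.abs(flat)) / 0.6745        # robust noise scale
            pvals = 2.0 * norm.sf(np.abs(flat) / sigma)     # two-sided Gaussian null
            order = np.argsort(pvals)
            m = pvals.size
            below = pvals[order] <= q * np.arange(1, m + 1) / m
            if below.any():
                p_crit = pvals[order][np.nonzero(below)[0].max()]
                thresh = sigma * norm.isf(p_crit / 2.0)     # magnitude cutoff
            else:
                thresh = np.inf
            kept = [tuple(np.where(np.abs(d) >= thresh, d, 0.0) for d in lvl)
                    for lvl in details]
            return pywt.waverec2([approx] + kept, wavelet)

        # usage on a synthetic "statistic image" with an embedded activation blob
        img = np.random.randn(64, 64)
        img[20:30, 20:30] += 3.0
        activation_map = fdr_threshold_map(img)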

  14. Wavelet-based time-dependent travel time tomography method and its application in imaging the Etna volcano in Italy

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Zhang, Haijiang

    2015-10-01

    It has been a challenge to image velocity changes in real time by seismic travel time tomography. If more seismic events are included in the tomographic system, the inverted velocity models do not have the necessary time resolution to resolve velocity changes. But if fewer events are used for real-time tomography, the system is less stable and the inverted model may contain artifacts, and thus the resolved velocity changes may not be real. To mitigate these issues, we propose a wavelet-based time-dependent double-difference (DD) tomography method. The new method combines the multiscale property of the wavelet representation and the fast-converging property of the simultaneous algebraic reconstruction technique to solve for the velocity models at multiple scales for sequential time segments. We first test the new method using synthetic data constructed using the real event and station distribution for Mount Etna volcano in Italy. Then we show its effectiveness in determining velocity changes for the 2001 and 2002 eruptions of Mount Etna volcano. Compared to standard DD tomography that uses seismic events from a longer time period, wavelet-based time-dependent tomography better resolves velocity changes that may be caused by fracture closure and opening as well as fluid migration before and after volcano eruptions.

  15. A Comparison of Wavelet-Based and Ridgelet-Based Texture Classification of Tissues in Computed Tomography

    NASA Astrophysics Data System (ADS)

    Semler, Lindsay; Dettori, Lucia

    The research presented in this article is aimed at developing an automated imaging system for classification of tissues in medical images obtained from Computed Tomography (CT) scans. The article focuses on using multi-resolution texture analysis, specifically the Haar wavelet, Daubechies wavelet, Coiflet wavelet, and the ridgelet. The algorithm consists of two steps: automatic extraction of the most discriminative texture features of regions of interest and creation of a classifier that automatically identifies the various tissues. The classification step is implemented using a cross-validation Classification and Regression Tree approach. A comparison of wavelet-based and ridgelet-based algorithms is presented. Tests on a large set of chest and abdomen CT images indicate that, among the three wavelet-based algorithms, the one using texture features derived from the Haar wavelet transform clearly outperforms those based on the Daubechies and Coiflet transforms. The tests also show that the ridgelet-based algorithm is significantly more effective and that texture features based on the ridgelet transform are better suited for texture classification in CT medical images.
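    A minimal sketch of the wavelet texture-feature step is given below: a multi-level 2D Haar decomposition of a region of interest and the mean energy of each detail sub-band, which could then feed a classification tree. The feature set and normalization are simplified relative to the article, and the region of interest is synthetic.

        import numpy as np

        def haar_level(img):
            """One level of the 2D Haar transform via 2x2 block averaging/differencing."""
            a = img[0::2, 0::2]; b = img[0::2, 1::2]
            c = img[1::2, 0::2]; d = img[1::2, 1::2]
            ll = (a + b + c + d) / 4.0   # approximation
            lh = (a + b - c - d) / 4.0   # horizontal detail
            hl = (a - b + c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            return ll, (lh, hl, hh)

        def texture_features(roi, levels=3):
            """Mean energy of each detail sub-band at each decomposition level."""
            feats, current = [], roi.astype(float)
            for _ in range(levels):
                current, details = haar_level(current)
                feats.extend(np.mean(band**2) for band in details)
            return np.array(feats)

        roi = np.random.rand(64, 64)     # stand-in for a CT region of interest
        print(texture_features(roi))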

  16. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method for numerical

  17. Estimation of loading density of underground well repositories for solid high-level radioactive wastes

    NASA Astrophysics Data System (ADS)

    Malkovsky, V. I.; Pek, A. A.

    2007-06-01

    The convective transfer of radionuclides by subsurface water from a geological repository of solidified high-level radioactive wastes (HLW) is considered. The repository is a cluster of wells of large diameter with HLW disposed of in the lower portions of the wells. The safe distance between wells as a function of rock properties and parameters of well loading with wastes has been estimated from mathematical modeling. A maximum permissible concentration of radionuclides in subsurface water near the ground surface above the repository is regarded as a necessary condition of safety. The estimates obtained show that well repositories allow for a higher density of solid HLW disposal than shaft storage facilities. Advantages and disadvantages of both types of storage facilities are considered in order to estimate the prospects for their use for underground disposal of solid HLW.

  18. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  19. Pedotransfer functions for Irish soils - estimation of bulk density (ρb) per horizon type

    NASA Astrophysics Data System (ADS)

    Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.

    2015-10-01

    Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for the measurement of soil carbon stock and nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted due to the cost in both laboratory and labour, and in cases where the core method cannot be applied. To overcome these oversights, pedotransfer functions are applied, using other known soil properties to estimate bulk density. Pedotransfer functions have been derived from large international datasets across many studies, each with its own inherent biases, many ignoring horizonation and depth variances. Initially, pedotransfer functions from the literature were used to predict bulk density for different horizon types using local known bulk density datasets. The best-performing pedotransfer functions were then selected, recalibrated, and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing in part or whole soil profiles. This then allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known datasets had the best predictions using the recalibrated and validated pedotransfer functions.
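    Pedotransfer functions of this kind are typically simple regressions of bulk density on routinely measured properties such as organic carbon, fitted separately per horizon type. The hedged least-squares sketch below uses a log-linear form and synthetic calibration data as placeholders; it is not the Irish calibration itself.

        import numpy as np

        def fit_ptf(org_carbon, bulk_density):
            """Fit rho_b = a + b*ln(OC) by least squares and report R^2."""
            X = np.column_stack([np.ones_like(org_carbon), np.log(org_carbon)])
            coef, *_ = np.linalg.lstsq(X, bulk_density, rcond=None)
            resid = bulk_density - X @ coef
            r2 = 1.0 - np.sum(resid**2) / np.sum((bulk_density - bulk_density.mean())**2)
            return coef, r2

        # synthetic calibration data for one horizon type (illustrative only)
        rng = np.random.default_rng(0)
        oc = rng.uniform(0.5, 8.0, 100)                            # organic carbon, %
        rho = 1.6 - 0.25 * np.log(oc) + rng.normal(0, 0.08, 100)   # g/cm^3
        coef, r2 = fit_ptf(oc, rho)
        print(f"rho_b = {coef[0]:.2f} {coef[1]:+.2f}*ln(OC),  R^2 = {r2:.2f}")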

  20. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America.

    PubMed

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L(-1). The highest density observed was ∼3 million zoospores L(-1). We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic
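    The reported "95% chance of detection with four 600 mL samples" follows from treating replicate water samples as independent Bernoulli trials. The small sketch below reproduces that style of calculation; the per-sample detection probability used here is a hypothetical value, not one estimated from the study's data.

        import math

        def cumulative_detection(p_single, n_samples):
            """Probability of at least one detection in n independent samples."""
            return 1.0 - (1.0 - p_single) ** n_samples

        def samples_needed(p_single, target=0.95):
            """Smallest number of samples giving at least the target probability."""
            return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

        p = 0.55                              # hypothetical per-sample detection probability
        print(cumulative_detection(p, 4))     # ~0.96 with four samples
        print(samples_needed(p))              # -> 4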

  1. Heterogeneous Occupancy and Density Estimates of the Pathogenic Fungus Batrachochytrium dendrobatidis in Waters of North America

    PubMed Central

    Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic

  2. Heterogeneous occupancy and density estimates of the pathogenic fungus Batrachochytrium dendrobatidis in waters of North America

    USGS Publications Warehouse

    Chestnut, Tara E.; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R.; Voytek, Mary; Olson, Deanna H.; Kirshtein, Julie

    2014-01-01

    Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L−1. The highest density observed was ∼3 million zoospores L−1. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 ml of water or five samples of 60 mL. Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic

  3. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1-58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2-32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  4. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station’s density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric

  5. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric

  6. The Mass of Saturn's B ring from hidden density waves

    NASA Astrophysics Data System (ADS)

    Hedman, M. M.; Nicholson, P. D.

    2015-12-01

    The B ring is Saturn's brightest and most opaque ring, but many of its fundamental parameters, including its total mass, are not well constrained. Elsewhere in the rings, the best mass density estimates come from spiral waves driven by mean-motion resonances with Saturn's various moons, but such waves have been hard to find in the B ring. We have developed a new wavelet-based technique for combining data from multiple stellar occultations that allows us to isolate the density wave signals from other ring structures. This method has been applied to 5 density waves using 17 occultations of the star gamma Crucis observed by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft. Two of these waves (generated by the Janus 2:1 and Mimas 5:2 Inner Lindblad Resonances) are visible in individual occultation profiles, but the other three wave signatures (associated with the Janus 3:2, Enceladus 3:1 and Pandora 3:2 Inner Lindblad Resonances) are not visible in individual profiles and can only be detected in the combined dataset. Estimates of the ring's surface mass density derived from these five waves fall between 40 and 140 g/cm^2. Surprisingly, these mass density estimates show no obvious correlation with the ring's optical depth. Furthermore, these data indicate that the total mass of the B ring is probably between one-third and two-thirds the mass of Saturn's moon Mimas.
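    For orientation, surface mass densities in such studies are usually obtained from the linear (WKB) dispersion relation for a density wave near its resonance, in which the radial wavenumber grows roughly linearly with distance from the resonance. The sketch below inverts that relation for the surface density; the resonance radius, orbital rate, and wavenumber slope are illustrative stand-ins rather than the VIMS measurements.

        import numpy as np

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

        def surface_density(m, omega_res, r_res, dk_dr):
            """
            Invert the linear density-wave dispersion relation
                k(r) ~ 3 (m - 1) Omega^2 (r - r_res) / (2 pi G sigma r_res)
            for the surface mass density sigma, given the measured slope dk/dr.
            """
            return 3.0 * (m - 1) * omega_res**2 / (2.0 * np.pi * G * r_res * dk_dr)

        # illustrative numbers for a Janus 2:1-type wave (not the VIMS results)
        r_res = 9.625e7                               # resonance radius, m
        omega = np.sqrt(3.793e16 / r_res**3)          # orbital rate at resonance, 1/s
        sigma = surface_density(m=2, omega_res=omega, r_res=r_res, dk_dr=4.5e-9)
        print(f"sigma ~ {sigma:.0f} kg/m^2 (~{sigma / 10:.0f} g/cm^2)")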

  7. Detection and density estimation of goblet cells in confocal endoscopy for the evaluation of celiac disease.

    PubMed

    Boschetto, D; Mirzaei, H; Leong, R W L; Grisan, E

    2015-08-01

    Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells from confocal endoscopy images, whose density changes in the case of pathological tissue. After a manual selection of a suitable region of interest, the candidate columnar and goblet cell centers are first detected and the cellular architecture is estimated from their positions using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity value of 90.3%. Furthermore, estimated and real goblet cell densities are comparable (error: 9.7 ± 16.9%, correlation: 87.2%, R(2) = 76%). PMID:26737720

  8. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
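    As a concrete instance of MoLC, the parameters of a gamma distribution can be recovered from the first two log-cumulants, i.e. the sample mean and variance of the log-data: the trigamma equation is inverted numerically for the shape, and the scale then follows from the first log-cumulant. This is a generic textbook sketch, not the authors' implementation.

        import numpy as np
        from scipy.special import digamma, polygamma
        from scipy.optimize import brentq

        def molc_gamma(samples):
            """Method-of-log-cumulants fit of a gamma(shape, scale) distribution."""
            logx = np.log(samples)
            k1, k2 = logx.mean(), logx.var()
            # second log-cumulant: psi'(shape) = k2  ->  invert the trigamma function
            shape = brentq(lambda a: polygamma(1, a) - k2, 1e-3, 1e3)
            # first log-cumulant: psi(shape) + ln(scale) = k1
            scale = np.exp(k1 - digamma(shape))
            return shape, scale

        rng = np.random.default_rng(1)
        x = rng.gamma(shape=3.0, scale=2.0, size=5000)
        print(molc_gamma(x))    # should be close to (3.0, 2.0)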

  9. Estimating black bear population density and genetic diversity at Tensas River, Louisiana using microsatellite DNA markers

    USGS Publications Warehouse

    Boersen, Mark R.; Clark, Joseph D.; King, Tim L.

    2003-01-01

    The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE = 29.4) bears, or 0.36 bears/km2. We were forced to examine a substantial number of loci to differentiate between some individuals because of low genetic variation. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
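    For reference, the heterogeneity estimator used in Program CAPTURE is in the family of Chao-type capture-frequency estimators. A generic lower-bound sketch of that idea is shown below; Program CAPTURE's actual model Chao Mh implementation includes bias and variance terms not reproduced here, and the capture counts are hypothetical.

        from collections import Counter

        def chao_mh_lower_bound(capture_counts):
            """
            Chao-type lower bound for population size under heterogeneous capture
            probabilities:  N_hat = M + f1**2 / (2 * f2), where M is the number of
            distinct individuals detected, f1 the number detected exactly once and
            f2 the number detected exactly twice.
            """
            freq = Counter(capture_counts)
            m = len(capture_counts)
            f1, f2 = freq.get(1, 0), freq.get(2, 0)
            return m + f1 * f1 / (2.0 * f2) if f2 > 0 else float("nan")

        # hypothetical per-individual capture counts from hair-trap genotyping
        counts = [1, 1, 1, 2, 1, 3, 2, 1, 1, 2, 4, 1, 1, 2, 1]
        print(chao_mh_lower_bound(counts))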

  10. Density-based load estimation using two-dimensional finite element models: a parametric study.

    PubMed

    Bona, Max A; Martin, Larry D; Fischer, Kenneth J

    2006-08-01

    A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530

  11. Kernel density estimation-based real-time prediction for respiratory motion

    NASA Astrophysics Data System (ADS)

    Ruan, Dan

    2010-03-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
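    A minimal sketch of the statistical idea follows: estimate the joint density of (covariate, response) with a Gaussian product kernel and read off the conditional mean of the response given a new covariate, which reduces to a Nadaraya-Watson weighted average. The lag-vector covariate, single fixed bandwidth, and use of the conditional mean as the point estimate are simplifications of the paper's two-stage scheme.

        import numpy as np

        def kde_conditional_mean(X_train, y_train, x_new, bandwidth=0.1):
            """
            Conditional-mean predictor derived from a Gaussian product-kernel joint KDE:
            the estimate is a kernel-weighted average of observed responses.
            """
            d2 = np.sum((X_train - x_new) ** 2, axis=1)
            w = np.exp(-0.5 * d2 / bandwidth**2)
            return np.sum(w * y_train) / np.sum(w)

        # toy respiratory-like trace; covariate = last 3 samples, response = next sample
        t = np.arange(0, 60, 0.2)
        trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
        X = np.stack([trace[i:i + 3] for i in range(trace.size - 3)])
        y = trace[3:]
        print(kde_conditional_mean(X[:-1], y[:-1], X[-1]), y[-1])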

  12. Estimation of graphite density and mechanical strength variation of VHTR during air-ingress accident

    SciTech Connect

    Eung Soo Kim

    2008-04-01

    An air-ingress accident in a Very High Temperature Gas-Cooled Reactor (VHTR) is anticipated to cause severe changes to graphite density and mechanical strength by an oxidation process that has many side effects. However, quantitative estimations have not yet been performed. This study focuses on predicting the changes in graphite density and mechanical strength via a thermal-hydraulic system analysis code. In order to analyze the change in graphite density, a simple graphite burn-off model was developed. The model is based on the similarities between a parallel electrical circuit and graphite oxidation. It was used to determine overall changes in the graphite's geometry and density. The model was validated by comparing its results to experimental data obtained at several temperatures. In the experiment, cylindrically shaped graphite specimens were oxidized in an electrical furnace and the variation of their mass was measured against time. The experiment covered temperatures between 600 °C and 900 °C. The experimental data validated the model's accuracy. Finally, the developed model, along with other comprehensive graphite oxidation models, was integrated into the VHTR system analysis code, GAMMA. The GT-MHR 600 MWt reactor was selected as a reference reactor. Based on the calculation, the main oxidation process was observed 5.5 days after the accident, following the onset of natural convection. The core maximum temperature reached 1600 °C but never exceeded the maximum temperature criterion, 1800 °C. However, the oxidation process did significantly decrease the density of the bottom reflector, making it vulnerable to mechanical stress. The stress on the bottom reflector is greatly increased because it sustains the reactor core. The calculation proceeded until 11 days after the accident, resulting in an observed 4.5% decrease in density and a 25% reduction of mechanical strength.

  13. Wavelet-based signal processing of in vitro ultrasonic measurements at the proximal femur.

    PubMed

    Dencks, Stefanie; Barkmann, Reinhard; Padilla, Frédéric; Haïat, Guillaume; Laugier, Pascal; Glüer, Claus-C

    2007-06-01

    To estimate osteoporotic fracture risk, several techniques for quantitative ultrasound (QUS) measurements at peripheral sites have been developed. As these techniques are limited in the prediction of fracture risk of the central skeleton, such as the hip, we are developing a QUS device for direct measurements at the femur. In doing so, we noticed the necessity to improve the conventional signal processing because it failed in a considerable number of measurements due to multipath transmission. Two sets of excised human femurs (n = 6 + 34) were scanned in transmission mode. Instead of using the conventional methods, the radio-frequency signals were processed with the continuous wavelet transform to detect their times-of-flight for the calculation of speed-of-sound (SOS) in bone. The SOS values were averaged over a region similar to the total hip region of dual X-ray absorptiometry (DXA) measurements and compared with bone mineral density (BMD) measured with DXA. Testing six standard wavelets, this algorithm failed for only 0% to 6% of scans in test set 1, compared with 29% when using conventional algorithms. For test set 2, it failed for 2% to 12%, compared with approximately 40%. SOS and BMD correlated significantly in both test sets (test set 1: r2 = 0.87 to 0.92, p < 0.007; test set 2: r2 = 0.68 to 0.79, p < 0.0001). The correlations are comparable with correlations recently reported. However, the number of evaluable signals could be substantially increased, which improves the prospects for in vivo measurements. PMID:17445965

  14. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) such as compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty, but overall the system still appears to offer a factor-of-five to factor-of-eight improvement in energy density (i.e., vehicle range/endurance) over present battery-based technologies.

  15. Magnetic fields, plasma densities, and plasma beta parameters estimated from high-frequency zebra fine structures

    NASA Astrophysics Data System (ADS)

    Karlický, M.; Jiricka, K.

    2002-10-01

    Using the recent model of the radio zebra fine structures (Ledenev et al. 2001), the magnetic fields, plasma densities, and plasma beta parameters are estimated from high-frequency zebra fine structures. It was found that in the flare radio source of high-frequency (1-2 GHz) zebras the densities and magnetic fields vary in the intervals of (1-4)×10^10 cm^-3 and 40-230 G, respectively. Assuming then a flare temperature of about 10^7 K, the plasma beta parameters in the zebra radio sources are in the 0.05-0.81 interval. Thus the plasma pressure effects in such radio sources, especially in those with many zebra lines, are not negligible.
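    The plasma beta quoted here is the ratio of thermal to magnetic pressure, beta = 8*pi*n*k_B*T / B^2 in Gaussian units. The sketch below evaluates it for two combinations taken from the quoted density and field ranges (the pairings are illustrative, not the paper's exact endpoints).

        import numpy as np

        K_B = 1.380649e-16   # Boltzmann constant, erg/K (Gaussian units)

        def plasma_beta(n_cm3, T_K, B_gauss):
            """beta = thermal pressure / magnetic pressure = 8*pi*n*k_B*T / B**2."""
            return 8.0 * np.pi * n_cm3 * K_B * T_K / B_gauss**2

        # example combinations from the quoted ranges, at T = 1e7 K
        for n, B in [(1e10, 230.0), (4e10, 40.0)]:
            print(f"n = {n:.0e} cm^-3, B = {B:.0f} G -> beta = {plasma_beta(n, 1e7, B):.3f}")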

  16. A maximum volume density estimator generalized over a proper motion-limited sample

    NASA Astrophysics Data System (ADS)

    Lam, Marco C.; Rowell, Nicholas; Hambly, Nigel C.

    2015-07-01

    The traditional Schmidt density estimator has been proven to be unbiased and effective in a magnitude-limited sample. Previously, efforts have been made to generalize it for populations with non-uniform density and proper motion-limited cases. This work shows that the then-good assumptions for a proper motion-limited sample are no longer sufficient to cope with modern data. Populations with larger differences in the kinematics as compared to the local standard of rest are most severely affected. We show that this systematic bias can be removed by treating the discovery fraction inseparable from the generalized maximum volume integrand. The treatment can be applied to any proper motion-limited sample with good knowledge of the kinematics. This work demonstrates the method through application to a mock catalogue of a white dwarf-only solar neighbourhood for various scenarios and compared against the traditional treatment using a survey with Pan-STARRS-like characteristics.
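    The underlying Schmidt estimator sums 1/Vmax over the sample, where Vmax is the survey volume within which each object would still satisfy the selection limits; the generalization discussed here folds the proper-motion discovery fraction into that volume. The magnitude-limited sketch below is generic, with the proper-motion correction reduced to a placeholder weight.

        import numpy as np

        def schmidt_density(abs_mag, app_mag_limit, omega_sr, discovery_fraction=None):
            """
            Space density from the 1/Vmax estimator: each object contributes 1/Vmax,
            with Vmax the volume out to the distance at which it reaches the apparent
            magnitude limit.  `discovery_fraction` is a placeholder for the
            proper-motion-dependent weighting discussed in the paper.
            """
            d_max_pc = 10.0 ** (0.2 * (app_mag_limit - abs_mag) + 1.0)   # parsecs
            v_max = (omega_sr / 3.0) * d_max_pc**3                       # pc^3
            if discovery_fraction is not None:
                v_max = v_max * discovery_fraction
            return np.sum(1.0 / v_max)

        # illustrative white-dwarf-like sample (not the paper's mock catalogue)
        M = np.random.uniform(13.0, 16.0, 200)     # absolute magnitudes
        rho = schmidt_density(M, app_mag_limit=19.0, omega_sr=1.0)
        print(f"space density ~ {rho:.2e} objects per pc^3")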

  17. Estimating the probability density of the scattering cross section from Rayleigh scattering experiments

    NASA Astrophysics Data System (ADS)

    Hengartner, Nicolas; Talbot, Lawrence; Shepherd, Ian; Bickel, Peter

    1995-06-01

    An important parameter in the experimental study of the dynamics of combustion is the probability distribution of the effective Rayleigh scattering cross section. This cross section cannot be observed directly. Instead, pairs of measurements of laser intensities and Rayleigh scattering counts are observed. Our aim is to provide estimators for the probability density function of the scattering cross section from such measurements. The probability distribution is derived first for the number of recorded photons in the Rayleigh scattering experiment. In this approach the laser intensity measurements are treated as known covariates. This departs from the usual practice of normalizing the Rayleigh scattering counts by the laser intensities. For distributions supported on finite intervals, two estimators are proposed, one based on expansion of the density in

  18. New density estimation methods for charged particle beams with applications to microbunching instability

    SciTech Connect

    Terzic, B.; Bassi, G.

    2011-07-08

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
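    The thresholded-wavelet idea can be illustrated on a gridded two-dimensional density: bin the particles, transform the counts, zero the detail coefficients below a noise-scaled threshold, and invert. PyWavelets, the Haar wavelet, the universal-style threshold, and the grid size are choices made for brevity here, not those of the CSR code.

        import numpy as np
        import pywt

        def twt_density(particles, grid=(128, 128), wavelet="haar", level=4, k=3.0):
            """Bin particles onto a grid, then denoise by thresholding wavelet details."""
            hist, xedges, yedges = np.histogram2d(particles[:, 0], particles[:, 1], bins=grid)
            coeffs = pywt.wavedec2(hist, wavelet, level=level)
            detail = np.concatenate([d.ravel() for lvl in coeffs[1:] for d in lvl])
            thresh = k * np.median(np.abs(detail)) / 0.6745   # robust noise estimate
            kept = [coeffs[0]] + [
                tuple(pywt.threshold(d, thresh, mode="hard") for d in lvl)
                for lvl in coeffs[1:]
            ]
            return pywt.waverec2(kept, wavelet), (xedges, yedges)

        # toy Gaussian bunch with sampling noise (not a CSR simulation)
        rng = np.random.default_rng(2)
        pts = rng.normal(0.0, 1.0, size=(200_000, 2))
        density, _ = twt_density(pts)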

  19. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009), 10.1103/PhysRevSTAB.12.080704; G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009), 10.1103/PhysRevSTAB.12.080704], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.

  20. "Prospecting Asteroids: Indirect technique to estimate overall density and inner composition"

    NASA Astrophysics Data System (ADS)

    Such, Pamela

    2016-07-01

    Spectroscopic studies of asteroids make it possible to obtain some information on their surface composition, but they say little about the innermost material, porosity and density of the object. In addition, spectroscopic observations are affected by the effects of "space weathering" produced by the bombardment of charged particles, which for certain materials changes their chemical structure, albedo and other physical properties, partly altering the chances of identifying them. Data such as the mass, size and density of asteroids are essential when proposing space missions, in order to determine the best candidates for space exploration, and it is of great importance to be able to determine any of them a priori, remotely from Earth. For many years the masses of the largest asteroids have been determined by studying the gravitational effects they have on smaller asteroids that approach them (see Davis and Bender, 1977; Schubart and Matson, 1979; Scholl et al. 1987; Hoffman, 1989b, among others), but estimates of the masses of the smallest objects are limited to the effects that occur in extremely close encounters with other asteroids of similar size. This paper presents the results of a search for approaches of pairs of asteroids that come within 0.0004 AU (50,000 km) of each other, in order to study their masses through the astrometric method and, in the future, to estimate their densities and internal composition. References: Davis, D. R., and D. F. Bender. 1977. Asteroid mass determinations: search for further encounter opportunities. Bull. Am. Astron. Soc. 9, 502-503. Hoffman, M. 1989b. Asteroid mass determination: Present situation and perspectives. In Asteroids II (R. P. Binzel, T. Gehrels, and M. S. Matthews, Eds.), pp. 228-239. Univ. Arizona Press, Tucson. Scholl, H., L. D. Schmadel and S. Roser 1987. The mass of the asteroid (10) Hygiea derived from observations of (829) Academia. Astron. Astrophys. 179, 311-316. Schubart, J. and D. L. Matson 1979. Masses and

  1. Efficient 3D movement-based kernel density estimator and application to wildlife ecology

    USGS Publications Warehouse

    Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.

    2014-01-01

    We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.

  2. Estimations of electron densities and temperatures in He-3 dominated plasmas. [in nuclear pumped lasers]

    NASA Technical Reports Server (NTRS)

    Depaola, B. D.; Marcum, S. D.; Wrench, H. K.; Whitten, B. L.; Wells, W. E.

    1979-01-01

    It is very useful to have a method of estimating electron temperatures and electron densities in nuclear pumped plasmas because measurements of such quantities are very difficult. This paper describes a method based on a rate equation analysis of the ionized species in the plasma and the electron energy balance. In addition to the ionized species, certain neutral species must also be calculated. Examples are given for pure helium and a mixture of helium and argon. In the He-Ar case, He(+), He2(+), He(2 3S), Ar(+), Ar2(+), and excited Ar are evaluated.

  3. Bayesian semiparametric power spectral density estimation with applications in gravitational wave data analysis

    NASA Astrophysics Data System (ADS)

    Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson

    2015-09-01

    The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.

  4. ANNz2 - Photometric redshift and probability density function estimation using machine-learning

    NASA Astrophysics Data System (ADS)

    Sadeh, Iftach

    2014-05-01

    Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz2, is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.

  5. Daniell method for power spectral density estimation in atomic force microscopy.

    PubMed

    Labuda, Aleksander

    2016-03-01

    An alternative method for power spectral density (PSD) estimation--the Daniell method--is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion--the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum. PMID:27036781
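    The two estimators differ only in where the averaging happens: Bartlett averages the periodograms of consecutive unwindowed segments, while Daniell smooths the full-record periodogram over neighbouring frequency bins. A compact numpy comparison on a synthetic record is sketched below; normalizations and the absence of windowing are simplifications.

        import numpy as np

        def bartlett_psd(x, n_segments, fs=1.0):
            """Average the periodograms of consecutive, non-overlapping segments."""
            seg_len = len(x) // n_segments
            segs = x[:seg_len * n_segments].reshape(n_segments, seg_len)
            per = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / (seg_len * fs)
            return np.fft.rfftfreq(seg_len, 1.0 / fs), per.mean(axis=0)

        def daniell_psd(x, half_width, fs=1.0):
            """Smooth the full-record periodogram with a moving average in frequency."""
            per = np.abs(np.fft.rfft(x)) ** 2 / (len(x) * fs)
            kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
            return np.fft.rfftfreq(len(x), 1.0 / fs), np.convolve(per, kernel, mode="same")

        # synthetic noise record standing in for cantilever thermal motion
        rng = np.random.default_rng(3)
        x = rng.normal(size=2**16)
        f_b, p_b = bartlett_psd(x, n_segments=64)
        f_d, p_d = daniell_psd(x, half_width=32)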

  6. Management of Deep Brain Stimulator Battery Failure: Battery Estimators, Charge Density, and Importance of Clinical Symptoms

    PubMed Central

    Fakhar, Kaihan; Hastings, Erin; Butson, Christopher R.; Foote, Kelly D.; Zeilman, Pam; Okun, Michael S.

    2013-01-01

    Objective We aimed in this investigation to study deep brain stimulation (DBS) battery drain with special attention directed toward patient symptoms prior to and following battery replacement. Background Previously our group developed web-based calculators and smart phone applications to estimate DBS battery life (http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator). Methods A cohort of 320 patients undergoing DBS battery replacement from 2002–2012 were included in an IRB approved study. Statistical analysis was performed using SPSS 20.0 (IBM, Armonk, NY). Results The mean charge density for treatment of Parkinson’s disease was 7.2 µC/cm2/phase (SD = 3.82), for dystonia was 17.5 µC/cm2/phase (SD = 8.53), for essential tremor was 8.3 µC/cm2/phase (SD = 4.85), and for OCD was 18.0 µC/cm2/phase (SD = 4.35). There was a significant relationship between charge density and battery life (r = −.59, p<.001), as well as total power and battery life (r = −.64, p<.001). The UF estimator (r = .67, p<.001) and the Medtronic helpline (r = .74, p<.001) predictions of battery life were significantly positively associated with actual battery life. Battery status indicators on Soletra and Kinetra were poor predictors of battery life. In 38 cases, the symptoms improved following a battery change, suggesting that the neurostimulator was likely responsible for symptom worsening. For these cases, both the UF estimator and the Medtronic helpline were significantly correlated with battery life (r = .65 and r = .70, respectively, both p<.001). Conclusions Battery estimations, charge density, total power and clinical symptoms were important factors. The observation of clinical worsening that was rescued following neurostimulator replacement reinforces the notion that changes in clinical symptoms can be associated with battery drain. PMID:23536810
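    Charge density per phase, as reported here, is conventionally computed as the charge delivered in one phase (current times pulse width) divided by the electrode contact area. The small calculator below illustrates that arithmetic; the stimulation current and the default contact area are assumed example values, and voltage-programmed devices would additionally require the electrode impedance to obtain the current.

        def charge_density_per_phase(current_mA, pulse_width_us, contact_area_cm2=0.06):
            """
            Charge density per phase in microcoulombs per cm^2:
            Q = I * pulse_width; density = Q / contact area.  The default contact
            area is only an illustrative assumption for a typical DBS electrode.
            """
            charge_uC = (current_mA * 1e-3) * (pulse_width_us * 1e-6) * 1e6
            return charge_uC / contact_area_cm2

        # e.g. 3 mA at 90 microseconds on a 0.06 cm^2 contact
        print(f"{charge_density_per_phase(3.0, 90.0):.1f} uC/cm^2/phase")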

  7. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
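
    A minimal sketch of the truncation idea, assuming a scalar state with known bounds (not the turbofan model of the paper):

```python
# After a Kalman update, truncate the state PDF at known physical bounds and
# take the mean of the truncated Gaussian as the constrained estimate.
import numpy as np
from scipy.stats import truncnorm

x_hat, p = 0.95, 0.04        # unconstrained Kalman estimate and variance (hypothetical)
lower, upper = 0.0, 1.0      # known physical bounds on the state, e.g. a health parameter

sigma = np.sqrt(p)
a, b = (lower - x_hat) / sigma, (upper - x_hat) / sigma    # standardized bounds
x_constrained = truncnorm.mean(a, b, loc=x_hat, scale=sigma)
p_constrained = truncnorm.var(a, b, loc=x_hat, scale=sigma)
print(x_constrained, p_constrained)
```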

  8. Exploring neural directed interactions with transfer entropy based on an adaptive kernel density estimator.

    PubMed

    Zuo, K; Bellanger, J J; Yang, C; Shu, H; Le Bouquin Jeannés, R

    2013-01-01

    This paper aims at estimating causal relationships between signals to detect flow propagation in autoregressive and physiological models. The main challenge of the ongoing work is to discover whether neural activity in a given structure of the brain influences activity in another area during epileptic seizures. This question refers to the concept of effective connectivity in neuroscience, i.e. to the identification of information flows and oriented propagation graphs. Past efforts to determine effective connectivity are rooted in Wiener's definition of causality, adapted into a practical form by Granger using autoregressive models. A number of studies argue against such a linear approach when nonlinear dynamics are suspected in the relationship between signals. Consequently, nonlinear nonparametric approaches, such as transfer entropy (TE), have been introduced to overcome the limitations of linear methods and have been promoted in many studies dealing with electrophysiological signals. Although many TE estimators have been developed, further improvement can be expected. In this paper, we investigate a new strategy by introducing an adaptive kernel density estimator to improve TE estimation. PMID:24110694
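
    A naive, fixed-bandwidth plug-in estimator of transfer entropy using kernel density estimates is sketched below; it does not implement the paper's adaptive kernel, and the embedding, bandwidth and test signals are illustrative assumptions.

```python
# Naive fixed-bandwidth plug-in sketch of transfer entropy X -> Y with KDEs.
import numpy as np
from sklearn.neighbors import KernelDensity

def kde_logpdf(data, points, bandwidth=0.3):
    """Log density of `points` under a Gaussian KDE fitted to `data`."""
    kde = KernelDensity(bandwidth=bandwidth).fit(data)
    return kde.score_samples(points)

def transfer_entropy(x, y, bandwidth=0.3):
    # 1-step embedding: does x_t help predict y_{t+1} beyond y_t?
    yf, yp, xp = y[1:], y[:-1], x[:-1]
    A = np.column_stack([yf, yp, xp])    # joint (y_{t+1}, y_t, x_t)
    B = np.column_stack([yp, xp])        # joint (y_t, x_t)
    C = np.column_stack([yf, yp])        # joint (y_{t+1}, y_t)
    D = yp[:, None]                      # marginal y_t
    log_te = (kde_logpdf(A, A, bandwidth) + kde_logpdf(D, D, bandwidth)
              - kde_logpdf(B, B, bandwidth) - kde_logpdf(C, C, bandwidth))
    return log_te.mean()                 # nats

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 1) * 0.8 + 0.2 * rng.standard_normal(2000)   # y driven by past x
print(transfer_entropy(x, y), transfer_entropy(y, x))        # expect the first to be larger
```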

  9. A more appropriate white blood cell count for estimating malaria parasite density in Plasmodium vivax patients in northeastern Myanmar.

    PubMed

    Liu, Huaie; Feng, Guohua; Zeng, Weilin; Li, Xiaomei; Bai, Yao; Deng, Shuang; Ruan, Yonghua; Morris, James; Li, Siman; Yang, Zhaoqing; Cui, Liwang

    2016-04-01

    The conventional method of estimating parasite densities employs an assumption of 8000 white blood cells (WBCs)/μl. However, due to leucopenia in malaria patients, this number appears to overestimate parasite densities. In this study, we assessed the accuracy of parasite density estimated using this assumed WBC count in eastern Myanmar, where Plasmodium vivax has become increasingly prevalent. From 256 patients with uncomplicated P. vivax malaria, we estimated parasite density and counted WBCs by using an automated blood cell counter. It was found that WBC counts were not significantly different between patients of different gender, axillary temperature, and body mass index levels, whereas they were significantly different between age groups of patients and the time points of measurement. The median parasite densities calculated with the actual WBC counts (1903/μl) and the assumed WBC count of 8000/μl (2570/μl) were significantly different. We demonstrated that using the assumed WBC count of 8000 cells/μl to estimate parasite densities of P. vivax malaria patients in this area would lead to an overestimation. For P. vivax patients aged five years and older, an assumed WBC count of 5500/μl best estimated parasite densities. This study provides more realistic assumed WBC counts for estimating parasite densities in P. vivax patients from low-endemicity areas of Southeast Asia. PMID:26802490
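
    The underlying arithmetic, shown here with hypothetical counts, makes the dependence on the assumed WBC count explicit.

```python
# Standard thick-smear calculation with hypothetical numbers: the parasite
# density scales directly with whichever WBC count is assumed or measured.
parasites_counted = 120        # parasites seen while counting WBCs (hypothetical)
wbcs_counted = 200             # WBCs counted alongside (common convention)

def parasite_density(wbc_per_ul):
    return parasites_counted / wbcs_counted * wbc_per_ul   # parasites per microliter

print(parasite_density(8000))   # conventional assumption
print(parasite_density(5500))   # assumption suggested above for P. vivax patients >= 5 years
```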

  10. Similarities between Line Fishing and Baited Stereo-Video Estimations of Length-Frequency: Novel Application of Kernel Density Estimates

    PubMed Central

    Langlois, Timothy J.; Fitzpatrick, Benjamin R.; Fairclough, David V.; Wakefield, Corey B.; Hesp, S. Alex; McLean, Dianne L.; Harvey, Euan S.; Meeuwig, Jessica J.

    2012-01-01

    Age structure data is essential for single species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length and skewed towards smaller fish than that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov–Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS were unexpectedly similar. PMID
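
    A generic sketch of comparing two length-frequency samples with a kernel density estimate and the two-sample KS test is given below (synthetic lengths; not the authors' specific KDE-based test).

```python
# Illustrative comparison of two hypothetical length-frequency samples.
import numpy as np
from scipy.stats import gaussian_kde, ks_2samp

rng = np.random.default_rng(2)
lengths_line = rng.normal(450, 60, size=300)     # mm, line-caught fish (hypothetical)
lengths_bruvs = rng.normal(420, 70, size=300)    # mm, stereo-BRUVS measurements (hypothetical)

grid = np.linspace(200, 700, 500)
kde_line = gaussian_kde(lengths_line)(grid)      # smoothed length-frequency curves
kde_bruvs = gaussian_kde(lengths_bruvs)(grid)

stat, p_value = ks_2samp(lengths_line, lengths_bruvs)
print("KS statistic:", stat, "p =", p_value)
print("mean length difference (mm):", lengths_line.mean() - lengths_bruvs.mean())
```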

  11. Similarities between line fishing and baited stereo-video estimations of length-frequency: novel application of Kernel Density Estimates.

    PubMed

    Langlois, Timothy J; Fitzpatrick, Benjamin R; Fairclough, David V; Wakefield, Corey B; Hesp, S Alex; McLean, Dianne L; Harvey, Euan S; Meeuwig, Jessica J

    2012-01-01

    Age structure data is essential for single species stock assessments but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length and skewed towards smaller fish than that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov-Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly, but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data to a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS were unexpectedly similar. PMID

  12. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  13. Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

    NASA Astrophysics Data System (ADS)

    Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

    2013-12-01

    The current study deals with the heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The results of HRV indicated dominance towards the sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when RMSSD, SD1 and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, which suggests that smoking does not affect the conduction pathway of the heart.
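
    A small sketch of the two analyses on synthetic data follows: an RMSSD-style HRV feature from RR intervals and a Daubechies-6 ('db6') wavelet decomposition using PyWavelets; signal content and decomposition level are illustrative.

```python
# Synthetic illustration of an HRV time-domain feature and a db6 wavelet decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(3)
rr_intervals = 0.8 + 0.05 * rng.standard_normal(300)        # seconds, hypothetical RR series
rmssd = np.sqrt(np.mean(np.diff(rr_intervals) ** 2)) * 1e3  # RMSSD in ms

fs = 360
t = np.arange(10 * fs) / fs
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)  # stand-in for ECG
coeffs = pywt.wavedec(ecg_like, "db6", level=6)              # approximation + detail coefficients

print("RMSSD (ms):", rmssd)
print("detail-band energies:", [float(np.sum(c ** 2)) for c in coeffs[1:]])
```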

  14. Wavelet-based multifractal analysis of earthquakes temporal distribution in Mammoth Mountain volcano, Mono County, Eastern California

    NASA Astrophysics Data System (ADS)

    Zamani, Ahmad; Kolahi Azar, Amir; Safavi, Ali

    2014-06-01

    This paper presents a wavelet-based multifractal approach to characterize the statistical properties of the temporal distribution of the 1982-2012 seismic activity in Mammoth Mountain volcano. The fractal analysis of the time-occurrence series of seismicity has been carried out in relation to the seismic swarm associated with a magmatic intrusion beneath the volcano on 4 May 1989. We used the wavelet transform modulus maxima based multifractal formalism to obtain the multifractal characteristics of seismicity before, during, and after the unrest. The results revealed that the earthquake sequences across the study area show time-scaling features. It is clearly perceived that the multifractal characteristics are not constant in different periods and there are differences among the seismicity sequences. The attributes of the singularity spectrum have been utilized to determine the complexity of seismicity for each period. Findings show that the temporal distribution of earthquakes for the swarm period was simpler with respect to the pre- and post-swarm periods.

  15. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    NASA Astrophysics Data System (ADS)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets, we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis) the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with a de-trended fluctuation analysis approach.

  16. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus

    PubMed Central

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg*km-2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the World's most abundant crocodilians. PMID:27224473

  17. Density and Biomass Estimates by Removal for an Amazonian Crocodilian, Paleosuchus palpebrosus.

    PubMed

    Campos, Zilca; Magnusson, William E

    2016-01-01

    Direct counts of crocodilians are rarely feasible and it is difficult to meet the assumptions of mark-recapture methods for most species in most habitats. Catch-out experiments are also usually not logistically or morally justifiable because it would be necessary to destroy the habitat in order to be confident that most individuals had been captured. We took advantage of the draining and filling of a large area of flooded forest during the building of the Santo Antônio dam on the Madeira River to obtain accurate estimates of the density and biomass of Paleosuchus palpebrosus. The density, 28.4 non-hatchling individuals per km2, is one of the highest reported for any crocodilian, except for species that are temporarily concentrated in small areas during dry-season drought. The biomass estimate of 63.15 kg*km-2 is higher than that for most or even all mammalian carnivores in tropical forest. P. palpebrosus may be one of the World's most abundant crocodilians. PMID:27224473

  18. Power spectral density of velocity fluctuations estimated from phase Doppler data

    NASA Astrophysics Data System (ADS)

    Jedelsky, Jan; Lizal, Frantisek; Jicha, Miroslav

    2012-04-01

    Laser Doppler Anemometry (LDA) and its modifications, such as Phase Doppler Particle Anemometry (P/DPA), are point-wise methods for optical non-intrusive measurement of particle velocity with high data rate. Conversion of the LDA velocity data from the temporal to the frequency domain - calculation of the power spectral density (PSD) of velocity fluctuations - is a non-trivial task due to non-equidistant data sampling in time. We briefly discuss possibilities for the PSD estimation and specify limitations caused by seeding density and other factors of the flow and LDA setup. Results of LDA measurements are compared with corresponding Hot Wire Anemometry (HWA) data in the frequency domain. The slot correlation (SC) method implemented in the software program Kern by Nobach (2006) is used for the PSD estimation. The influence of several input parameters on the resulting PSDs is described. The optimum setup of the software for our data of particle-laden air flow in a realistic human airway model is documented. The typical character of the flow is described using PSD plots of velocity fluctuations with comments on specific properties of the flow. Some recommendations for improvements of future experiments to acquire better PSD results are given.

  19. Transverse energy scaling and energy density estimates from 16O- and 32S-induced reactions

    SciTech Connect

    Not Available

    1989-01-01

    We discuss the dependence of transverse energy production on projectile mass, target mass, and on the impact parameter of the heavy ion reaction. The transverse energy is shown to scale with the number of participating nucleons. Various methods to estimate the attained energy density from the observed transverse energy are discussed. It is shown that the systematics of the energy density estimates suggest averages of 2-3 GeV/fm^3 rather than the much higher values attained by assuming Landau-stopping initial conditions. Based on the observed scaling of the transverse energy, an initial energy density profile may be estimated. 14 refs., 4 figs.

  20. Transverse energy scaling and energy density estimates from 16O- and 32S-induced reactions

    SciTech Connect

    Awes, T.C.; Albrecht, R.; Baktash, C.; Beckmann, P.; Berger, F.; Bock, R.; Claesson, G.; Clewing, G.; Dragon, L.; Eklund, A.

    1989-01-01

    We discuss the dependence of transverse energy production on projectile mass, target mass, and on the impact parameter of the heavy ion reaction. The transverse energy is shown to scale with the number of participating nucleons. Various methods to estimate the attained energy density from the observed transverse energy are discussed. It is shown that the systematics of the energy density estimates suggest averages of 2-3 GeV/fm^3 rather than the much higher values attained by assuming Landau-stopping initial conditions. Based on the observed scaling of the transverse energy, an initial energy density profile may be estimated. 11 refs., 4 figs.

  1. Accuracy of estimated geometric parameters of trees depending on the LIDAR data density

    NASA Astrophysics Data System (ADS)

    Hadas, Edyta; Estornell, Javier

    2015-04-01

    The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time-consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired for relatively large areas allow determination of the structure of forestry and agriculture areas and of the geometrical parameters of individual trees. In this study two LiDAR datasets with different densities were used: a sparse dataset with an average density of 0.5 pt/m2 and a dense dataset with a density of 4 pt/m2. 25 olive trees were selected, and field measurements of tree height, crown bottom height, length of crown diameters and tree position were performed. To determine the tree geometric parameters from LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy a) was based on canopy surface model (CSM) slicing at 0.5 m height, and in strategy b) minimum bounding polygons representing the tree crown area were created around the detected tree centroids. The individual steps were developed so that they can also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of trees were analyzed. As expected, tree heights were underestimated for both strategies (RMSE=0.7m for the dense dataset and RMSE=1.5m for the sparse) and tree crown heights were overestimated (RMSE=0.4m and RMSE=0.7m for the dense and sparse datasets respectively). For the dense dataset, strategy b) determined crown diameters more accurately (RMSE=0.5m) than strategy a) (RMSE=0.8m), and for the sparse dataset, only strategy a) proved adequate (RMSE=1.0m). The accuracy of both strategies was also examined for its dependence on tree size. For the dense dataset, the larger the tree (height or longer crown diameter), the higher was the error of the estimated tree height, and for the sparse dataset, the larger the tree

  2. A volumetric method for estimation of breast density on digitized screen-film mammograms.

    PubMed

    Pawluczyk, Olga; Augustine, Bindu J; Yaffe, Martin J; Rico, Dan; Yang, Jiwei; Mawdsley, Gordon E; Boyd, Norman F

    2003-03-01

    A method is described for the quantitative volumetric analysis of the mammographic density (VBD) from digitized screen-film mammograms. The method is based on initial calibration of the imaging system with a tissue-equivalent plastic device and the subsequent correction for variations in exposure factors and film processing characteristics through images of an aluminum step wedge placed adjacent to the breast during imaging. From information about the compressed breast thickness and technique factors used for taking the mammogram as well as the information from the calibration device, VBD is calculated. First, optical sensitometry is used to convert images to Log relative exposure. Second, the images are corrected for x-ray field inhomogeneity using a spherical section PMMA phantom image. The effectiveness of using the aluminum step wedge in tracking down the variations in exposure factors and film processing was tested by taking test images of the calibration device, aluminum step wedge and known density phantoms at various exposure conditions and also at different times over one year. Results obtained on known density phantoms show that VBD can be estimated to within 5% accuracy from the actual value. A first order thickness correction is employed to correct for inaccuracy in the compression thickness indicator of the mammography units. Clinical studies are ongoing to evaluate whether VBD can be a better indicator for breast cancer risk. PMID:12674236

  3. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  4. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
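
    The statistical comparison described above can be illustrated with hypothetical percent-density values; none of the numbers below reproduce the study data.

```python
# Toy illustration: correlation and paired t-test between raw and post-processed PD% estimates.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(4)
pd_raw = rng.uniform(5, 60, size=81)                     # percent density from raw images (hypothetical)
pd_processed = pd_raw + rng.normal(1.2, 2.0, size=81)    # small systematic offset (hypothetical)

r, p_corr = pearsonr(pd_raw, pd_processed)
t, p_paired = ttest_rel(pd_raw, pd_processed)
print(f"r = {r:.2f}, paired t-test p = {p_paired:.3g}, "
      f"mean difference = {np.mean(pd_processed - pd_raw):.2f}%")
```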

  5. How Does Spatial Study Design Influence Density Estimates from Spatial Capture-Recapture Models?

    PubMed Central

    Sollmann, Rahel; Gardner, Beth; Belant, Jerrold L.

    2012-01-01

    When estimating population density from data collected on non-invasive detector arrays, recently developed spatial capture-recapture (SCR) models present an advance over non-spatial models by accounting for individual movement. While these models should be more robust to changes in trapping designs, they have not been well tested. Here we investigate how the spatial arrangement and size of the trapping array influence parameter estimates for SCR models. We analysed black bear data collected with 123 hair snares with an SCR model accounting for differences in detection and movement between sexes and across the trapping occasions. To see how the size of the trap array and trap dispersion influence parameter estimates, we repeated analysis for data from subsets of traps: 50% chosen at random, 50% in the centre of the array and 20% in the South of the array. Additionally, we simulated and analysed data under a suite of trap designs and home range sizes. In the black bear study, we found that results were similar across trap arrays, except when only 20% of the array was used. Black bear density was approximately 10 individuals per 100 km2. Our simulation study showed that SCR models performed well as long as the extent of the trap array was similar to or larger than the extent of individual movement during the study period, and movement was at least half the distance between traps. SCR models performed well across a range of spatial trap setups and animal movements. Contrary to non-spatial capture-recapture models, they do not require the trapping grid to cover an area several times the average home range of the studied species. This renders SCR models more appropriate for the study of wide-ranging mammals and more flexible to design studies targeting multiple species. PMID:22539949

  6. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  7. Integration of Self-Organizing Map (SOM) and Kernel Density Estimation (KDE) for network intrusion detection

    NASA Astrophysics Data System (ADS)

    Cao, Yuan; He, Haibo; Man, Hong; Shen, Xiaoping

    2009-09-01

    This paper proposes an approach that integrates the self-organizing map (SOM) and kernel density estimation (KDE) techniques for an anomaly-based network intrusion detection (ABNID) system to monitor network traffic and capture potential abnormal behaviors. With the continuous development of network technology, information security has become a major concern for cyber system research. In modern net-centric and tactical warfare networks, it is even more critical to provide real-time protection for the availability, confidentiality, and integrity of the networked information. To this end, in this work we propose to explore the learning capabilities of SOM and to integrate it with KDE for network intrusion detection. KDE is used to estimate the distributions of the observed random variables that describe the network system and determine whether the network traffic is normal or abnormal. Meanwhile, the learning and clustering capabilities of SOM are employed to obtain well-defined data clusters to reduce the computational cost of the KDE. The principle of learning in SOM is to self-organize the network of neurons to seek similar properties for certain input patterns. Therefore, SOM can form an approximation of the distribution of the input space in a compact fashion, reduce the number of terms in a kernel density estimator, and thus improve the efficiency of the intrusion detection. We test the proposed algorithm on real-world data sets obtained from the Integrated Network Based Ohio University's Network Detective Service (INBOUNDS) system to show the effectiveness and efficiency of this method.
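
    A minimal, self-contained sketch of the SOM-plus-KDE idea follows; the SOM is hand-rolled, the features and thresholds are hypothetical, and nothing here reproduces the INBOUNDS experiments.

```python
# Quantize "normal" traffic with a small self-organizing map, fit a KDE on the
# SOM prototypes (far fewer points than the raw data), then flag low-density test points.
import numpy as np
from sklearn.neighbors import KernelDensity

def train_som(data, grid=(8, 8), iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for it in range(iters):
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)       # best matching unit
        frac = it / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5   # decaying rate and radius
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        nbhd = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]     # neighborhood function
        weights += lr * nbhd * (x - weights)
    return weights.reshape(-1, data.shape[1])

rng = np.random.default_rng(5)
normal_traffic = rng.normal(0, 1, size=(5000, 4))          # stand-in feature vectors
prototypes = train_som(normal_traffic)

kde = KernelDensity(bandwidth=0.5).fit(prototypes)          # density over the prototypes only
test = np.vstack([rng.normal(0, 1, size=(5, 4)), rng.normal(6, 1, size=(5, 4))])
threshold = np.quantile(kde.score_samples(normal_traffic), 0.01)
print(kde.score_samples(test) < threshold)                  # True marks suspected anomalies
```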

  8. A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti

    PubMed Central

    Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.

    2015-01-01

    Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to the infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford’s method when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and

  9. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation

  10. Probability density function estimation for characterizing hourly variability of ionospheric total electron content

    NASA Astrophysics Data System (ADS)

    Turel, N.; Arikan, F.

    2010-12-01

    Ionospheric channel characterization is an important task for both HF and satellite communications. The inherent space-time variability of the ionosphere can be observed through total electron content (TEC) that can be obtained using GPS receivers. In this study, within-the-hour variability of the ionosphere over high-latitude, midlatitude, and equatorial regions is investigated by estimating a parametric model for the probability density function (PDF) of GPS-TEC. The PDF is a useful tool in defining the statistical structure of communication channels. For this study, data covering half a solar cycle were collected for 18 GPS stations. Histograms of TEC, corresponding to experimental probability distributions, are used to estimate the parameters of five different PDFs. The best-fitting distribution to the TEC data is obtained using the maximum likelihood ratio of the estimated parametric distributions. It is observed that all of the midlatitude stations and most of the high-latitude and equatorial stations have TEC distributions that are lognormal. A representative distribution can easily be obtained for stations located in midlatitudes using solar zenith normalization. The stations located at very high latitudes or in equatorial regions cannot be described using only one PDF distribution. Due to significant seasonal variability, different distributions are required for summer and winter.
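
    A generic sketch of the distribution-selection step, using synthetic stand-in TEC samples and maximum likelihood, might look as follows.

```python
# Fit several candidate PDFs to synthetic hourly TEC samples and keep the one
# with the highest log-likelihood. The data and candidate set are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
tec = rng.lognormal(mean=2.5, sigma=0.4, size=500)   # TECU, synthetic stand-in data

candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "norm": stats.norm,
    "weibull_min": stats.weibull_min,
    "rayleigh": stats.rayleigh,
}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(tec)                            # maximum-likelihood parameter estimates
    loglik[name] = np.sum(dist.logpdf(tec, *params))

best = max(loglik, key=loglik.get)
print(best, loglik)
```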

  11. Critical current densities estimated from AC susceptibilities in proximity-induced superconducting matrix of multifilamentary wire

    NASA Astrophysics Data System (ADS)

    Akune, Tadahiro; Sakamoto, Nobuyoshi

    2009-03-01

    In a multifilamentary wire proximity-currents between filaments show a close resemblance with the inter-grain current in a high-Tc superconductor. The critical current densities of the proximity-induced superconducting matrix Jcm can be estimated from measured twist-pitch dependence of magnetization and have been shown to follow the well-known scaling law of the pinning strength. The grained Bean model is applied on the multifilamentary wire to obtain Jcm, where the filaments are immersed in the proximity-induced superconducting matrix. Difference of the superconducting characteristics of the filament, the matrix and the filament content factor give a variety of deformation on the AC susceptibility curves. The computed AC susceptibility curves of multifilamentary wires using the grained Bean model are favorably compared with the experimental results. The values of Jcm estimated from the susceptibilities using the grained Bean model are comparable to those estimated from measured twist-pitch dependence of magnetization. The applicability of the grained Bean model on the multifilamentary wire is discussed in detail.

  12. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  13. Estimating respiratory and heart rates from the correntropy spectral density of the photoplethysmogram.

    PubMed

    Garde, Ainara; Karlen, Walter; Ansermino, J Mark; Dumont, Guy A

    2014-01-01

    The photoplethysmogram (PPG) obtained from pulse oximetry measures local variations of blood volume in tissues, reflecting the peripheral pulse modulated by heart activity, respiration and other physiological effects. We propose an algorithm based on the correntropy spectral density (CSD) as a novel way to estimate respiratory rate (RR) and heart rate (HR) from the PPG. Time-varying CSD, a technique particularly well-suited for modulated signal patterns, is applied to the PPG. The respiratory and cardiac frequency peaks detected at extended respiratory (8 to 60 breaths/min) and cardiac (30 to 180 beats/min) frequency bands provide RR and HR estimations. The CSD-based algorithm was tested against the Capnobase benchmark dataset, a dataset from 42 subjects containing PPG and capnometric signals and expert-labeled reference RR and HR. The RR and HR estimation accuracy was assessed using the unnormalized root mean square (RMS) error. We investigated two window sizes (60 and 120 s) on the Capnobase calibration dataset to explore the time resolution of the CSD-based algorithm. A longer window decreases the RR error; for 120-s windows, the median RMS error (quartiles) obtained for RR was 0.95 (0.27, 6.20) breaths/min and for HR was 0.76 (0.34, 1.45) beats/min. Our experiments show that in addition to a high degree of accuracy and robustness, the CSD facilitates simultaneous and efficient estimation of RR and HR. Providing RR every minute expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool. PMID:24466088
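
    A simplified sketch of a correntropy spectral density on a synthetic, amplitude-modulated PPG-like signal is given below; the kernel width, centring and band limits are illustrative assumptions, and this is an illustration of the general idea rather than the authors' algorithm.

```python
# Gaussian-kernel correntropy over lags, mean-centred, then Fourier transformed;
# peaks are searched in respiratory and cardiac bands.
import numpy as np

def csd(x, fs, sigma=0.5, max_lag=None):
    x = (x - x.mean()) / x.std()
    n = len(x)
    max_lag = max_lag or n // 2
    v = np.array([np.mean(np.exp(-(x[:n - m] - x[m:]) ** 2 / (2 * sigma ** 2)))
                  for m in range(max_lag)])
    v -= v.mean()                                   # crude centring
    spec = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    return freqs, spec

fs = 25.0                                           # Hz, typical pulse-oximeter rate
t = np.arange(120 * fs) / fs
ppg = np.sin(2 * np.pi * 1.3 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t))  # 78 bpm, 15 br/min
freqs, spec = csd(ppg, fs)

resp = (freqs > 8 / 60) & (freqs < 60 / 60)         # 8-60 breaths/min band
card = (freqs > 30 / 60) & (freqs < 180 / 60)       # 30-180 beats/min band
print("RR ~", freqs[resp][np.argmax(spec[resp])] * 60, "breaths/min")
print("HR ~", freqs[card][np.argmax(spec[card])] * 60, "beats/min")
```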

  14. Coronal electron density distributions estimated from CMEs, DH type II radio bursts, and polarized brightness measurements

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Ok; Moon, Y.-J.; Lee, Jin-Yi; Lee, Kyoung-Sun; Kim, R.-S.

    2016-04-01

    We determine coronal electron density distributions (CEDDs) by analyzing decahectometric (DH) type II observations under two assumptions. DH type II bursts are generated by either (1) shocks at the leading edges of coronal mass ejections (CMEs) or (2) CME shock-streamer interactions. Among 399 Wind/WAVES type II bursts (from 1997 to 2012) associated with SOHO/LASCO (Large Angle Spectroscopic COronagraph) CMEs, we select 11 limb events whose fundamental and second harmonic emission lanes are well identified. We determine the lowest frequencies of fundamental emission lanes and the heights of the leading edges of their associated CMEs. We also determine the heights of CME shock-streamer interaction regions. The CEDDs are estimated by minimizing the root-mean-square error between the heights from the CME leading edges (or CME shock-streamer interaction regions) and DH type II bursts. We also estimate CEDDs of seven events using polarized brightness (pB) measurements. We find the following results. Under the first assumption, the average of the estimated CEDDs from 3 to 20 Rs is about 5-fold Saito's model (NSaito(r)). Under the second assumption, the average of the estimated CEDDs from 3 to 10 Rs is 1.5-fold NSaito(r). While the CEDDs obtained from pB measurements are significantly smaller than those based on the first assumption and on CME flank regions without streamers, they are consistent with those based on the second assumption. Our results show not only that about 1-fold NSaito(r) is a proper CEDD for analyzing DH type II bursts but also that CME shock-streamer interactions could be a plausible origin for generating DH type II bursts.
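
    The conversion from an observed type II emission frequency to a local electron density relies on the standard plasma-frequency relation; a minimal helper with a hypothetical example frequency is shown below.

```python
# Standard plasma-frequency relation: f_p [Hz] ~= 8980 * sqrt(n_e [cm^-3]).
def electron_density_cm3(freq_hz, harmonic=1):
    """Electron density from an observed emission frequency; harmonic=2 for the 2nd-harmonic lane."""
    f_plasma = freq_hz / harmonic
    return (f_plasma / 8980.0) ** 2

print(electron_density_cm3(1.0e6))              # 1 MHz fundamental lane (hypothetical)
print(electron_density_cm3(2.0e6, harmonic=2))  # same density inferred from its harmonic
```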

  15. Estimation of effective scatterer size and number density in near-infrared tomography

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2007-05-01

    Light scattering from tissue originates from the fluctuations in intra-cellular and extra-cellular components, so it is possible that macroscopic scattering spectroscopy could be used to quantify sub-microscopic structures. Both electron microscopy (EM) and optical phase contrast microscopy were used to study the origin of scattering from tissue. EM studies indicate that lipid-bound particle sizes appear to be distributed as a monotonic exponential function, with sub-micron structures dominating the distribution. Given assumptions about the index of refraction change, the shape of the scattering spectrum in the near infrared as measured through bulk tissue is consistent with what would be predicted by Mie theory with these particle size histograms. The relative scattering intensity of breast tissue sections (including 10 normal and 23 abnormal) was studied by phase contrast microscopy. Results show that stroma has higher scattering than epithelial tissue, and fat has the lowest values; tumor epithelium has lower scattering than normal epithelium, and stroma associated with tumor has lower scattering than normal stroma. Mie theory estimation of scattering spectra was used to estimate effective particle size values, and this was applied retrospectively to normal whole-breast spectra accumulated in ongoing clinical exams. The effective sizes ranged between 20 and 1400 nm, which are consistent with the subcellular organelles and collagen matrix fibrils discussed previously. This estimation method was also applied to images from cancer regions, with results indicating that the effective scatterer sizes of the region of interest (ROI) are close to those of the background for both the cancer patients and the benign patients; for the effective number density, there is a large difference between the ROI and the background for the cancer patients, while for the benign patients, the ROI values are relatively close to those of the background. Ongoing MRI-guided NIR studies indicated

  16. Simulation of Electron Cloud Density Distributions in RHIC Dipoles at Injection and Transition and Estimates for Scrubbing Times

    SciTech Connect

    He,P.; Blaskiewicz, M.; Fischer, W.

    2009-01-02

    In this report we summarize electron-cloud simulations for the RHIC dipole regions at injection and transition to estimate if scrubbing over practical time scales at injection would reduce the electron cloud density at transition to significantly lower values. The lower electron cloud density at transition will allow for an increase in the ion intensity.

  17. Feasibility of hydrogen density estimation from tomographic sensing of Lyman alpha emission

    NASA Astrophysics Data System (ADS)

    Waldrop, L.; Kamalabadi, F.; Ren, D.

    2015-12-01

    In this work, we describe the scientific motivation, basic principles, and feasibility of a new approach to the estimation of neutral hydrogen (H) density in the terrestrial exosphere based on the 3-D tomographic sensing of optically thin H emission at 121.6 nm (Lyman alpha). In contrast to existing techniques, Lyman alpha tomography allows for model-independent reconstruction of the underlying H distribution in support of investigations regarding the origin and time-dependent evolution of exospheric structure. We quantitatively describe the trade-off space between the measurement sampling rate, viewing geometry, and the spatial and temporal resolution of the reconstruction that is supported by the data. We demonstrate that this approach is feasible from either earth-orbiting satellites such as the stereoscopic NASA TWINS mission or from a CubeSat platform along a trans-exosphere trajectory such as that enabled by the upcoming Exploration Mission 1 launch.

  18. Daniell method for power spectral density estimation in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Labuda, Aleksander

    2016-03-01

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.

  19. Classification of motor imagery by means of cortical current density estimation and Von Neumann entropy.

    PubMed

    Kamousi, Baharan; Amini, Ali Nasiri; He, Bin

    2007-06-01

    The goal of the present study is to employ source imaging methods, such as cortical current density estimation, for the classification of left- and right-hand motor imagery tasks, which may be used for brain-computer interface (BCI) applications. The scalp-recorded EEG was first preprocessed by surface Laplacian filtering, time-frequency filtering, noise normalization and independent component analysis. Then the cortical imaging technique was used to solve the EEG inverse problem. Cortical current density distributions of left and right trials were classified from each other by exploiting the concept of Von Neumann entropy. The proposed method was tested on three human subjects (180 trials each) and a maximum accuracy of 91.5% and an average accuracy of 88% were obtained. The present results confirm the hypothesis that source analysis methods may improve accuracy for classification of motor imagery tasks. The promising results obtained using source analysis for classification of motor imagery enhance our ability to perform source analysis from single-trial EEG data recorded on the scalp, and may have applications to improved BCI systems. PMID:17409476
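
    As a generic illustration of the entropy measure (not the paper's full pipeline), the Von Neumann entropy of a trace-normalised covariance built from source-space data can be computed as follows; data dimensions are hypothetical.

```python
# Von Neumann entropy S = -sum(lambda_i * log(lambda_i)) of a trace-normalised
# covariance matrix; lower entropy indicates activity concentrated in fewer spatial modes.
import numpy as np

def von_neumann_entropy(trial):
    """trial: array of shape (n_sources, n_times) of cortical current density."""
    cov = trial @ trial.T
    rho = cov / np.trace(cov)                 # density-matrix-like normalisation (unit trace)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]        # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

rng = np.random.default_rng(7)
focal = rng.standard_normal((50, 1)) @ rng.standard_normal((1, 200))   # rank-1 (focal) activity
diffuse = rng.standard_normal((50, 200))                                # broadband (diffuse) activity
print(von_neumann_entropy(focal), von_neumann_entropy(diffuse))          # near 0 vs. clearly larger
```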

  20. Density estimates of Panamanian owl monkeys (Aotus zonalis) in three habitat types.

    PubMed

    Svensson, Magdalena S; Samudio, Rafael; Bearder, Simon K; Nekaris, K Anne-Isola

    2010-02-01

    The resolution of the ambiguity surrounding the taxonomy of Aotus means data on newly classified species are urgently needed for conservation efforts. We conducted a study on the Panamanian owl monkey (Aotus zonalis) between May and July 2008 at three localities in Chagres National Park, located east of the Panama Canal, using the line transect method to quantify abundance and distribution. Vegetation surveys were also conducted to provide a baseline quantification of the three habitat types. We observed 33 individuals within 16 groups at two of the three sites. Population density was highest in Campo Chagres with 19.7 individuals/km(2), and intermediate densities of 14.3 individuals/km(2) were observed at Cerro Azul. In La Llana, A. zonalis was not found. The presence of A. zonalis in Chagres National Park, albeit at seemingly low abundance, is encouraging. A longer-term study will be necessary to validate the abundance estimates gained in this pilot study in order to make conservation policy decisions. PMID:19852005

  1. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    SciTech Connect

    Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes, such as optical emission spectroscopic diagnostics, are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is located at a distance (∼30 cm) from the extraction region. Because of this, some uncertainty is expected if one tries to link the beam current with plasma properties inside the RF driver. Plasma characterization in the RF driver region of the source is therefore essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.

  2. Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Self, S.

    1980-01-01

    Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.

  3. Density estimation in aerial images of large crowds for automatic people counting

    NASA Astrophysics Data System (ADS)

    Herrmann, Christian; Metzler, Juergen

    2013-05-01

    Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at the same time, like pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously by designing a semi-automatic system able to count crowds consisting of hundreds or thousands of people based on aerial images of demonstrations or similar events. This system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction. To achieve this, we propose a new and automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. In order to automate the people counting system, we use crowd density estimation. The determination of crowd density is based on several features like edge intensity or spatial frequency. They indicate the density and discriminate between a crowd and other image regions like buildings, bushes or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. The performance gain of the new system is measured on a test set of aerial images showing large crowds containing up to 12,000 people. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
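
    The feature-based idea can be sketched as follows: describe each image block by an edge-intensity and a spatial-frequency measure and regress those features against known per-block counts. The specific features, block size and linear regression below are assumptions for illustration, not the authors' actual system, and the data are synthetic.

```python
# Minimal sketch: block-wise edge intensity and high-frequency energy as crowd-density
# features, calibrated against manual counts with a simple linear regression.
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def block_features(image, block=64):
    """Edge-intensity and high-frequency energy per non-overlapping block."""
    gx, gy = ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1)
    edges = np.hypot(gx, gy)
    feats = []
    for i in range(0, image.shape[0] - block + 1, block):
        for j in range(0, image.shape[1] - block + 1, block):
            patch = image[i:i + block, j:j + block]
            spec = np.abs(np.fft.fft2(patch))
            hi = spec[block // 4:, block // 4:].sum() / (spec.sum() + 1e-9)
            feats.append([edges[i:i + block, j:j + block].mean(), hi])
    return np.array(feats)

# Hypothetical training data: one aerial image plus per-block manual counts.
rng = np.random.default_rng(0)
train_img = rng.random((512, 512))
counts = rng.integers(0, 40, size=block_features(train_img).shape[0])

model = LinearRegression().fit(block_features(train_img), counts)
estimated_total = model.predict(block_features(train_img)).clip(min=0).sum()
print(f"estimated people in image: {estimated_total:.0f}")
```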

  4. Robust estimation of mammographic breast density: a patient-based approach

    NASA Astrophysics Data System (ADS)

    Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

    2012-02-01

    Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of the available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration-guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologists' grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
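
    The final tissue-separation step can be illustrated with a two-component mixture model on pooled breast-ROI intensities; a minimal sketch is given below. ROI extraction, intensity harmonization and the calibration-guided plausibility correction are omitted, and the intensity values are synthetic, so this is an illustration of the GMM step only, not the authors' full pipeline.

```python
# Minimal sketch: two-component Gaussian mixture on breast-ROI intensities, with
# mammographic percent density taken as the fraction assigned to the denser component.
import numpy as np
from sklearn.mixture import GaussianMixture

def percent_density(roi_pixels):
    """roi_pixels: 1-D array of breast-ROI intensities pooled over all four views."""
    x = roi_pixels.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    labels = gmm.predict(x)
    # Assume the brighter component corresponds to fibroglandular tissue
    # (verify this convention for the image type at hand).
    fibro = labels == np.argmax(gmm.means_.ravel())
    return 100.0 * fibro.mean()

# Hypothetical example: mixture of darker adipose and brighter dense-tissue pixels.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.3, 0.05, 70_000),   # adipose
                         rng.normal(0.6, 0.05, 30_000)])  # fibroglandular
print(f"mammographic percent density ≈ {percent_density(pixels):.1f}%")
```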

  5. Evaluation of a brushing machine for estimating density of spider mites on grape leaves.

    PubMed

    Macmillan, Craig D; Costello, Michael J

    2015-12-01

    Direct visual inspection and enumeration for estimating field population density of economically important arthropods, such as spider mites, provide more information than alternative methods, such as binomial sampling, but are laborious and time consuming. A brushing machine can reduce sampling time and perhaps improve accuracy. Although brushing technology has been investigated and recommended as a useful tool for researchers and integrated pest management practitioners, little work to demonstrate the validity of this technique has been performed since the 1950s. We investigated the brushing machine manufactured by Leedom Enterprises (Mi-Wuk Village, CA, USA) for studies on spider mites. We evaluated (1) the mite recovery efficiency relative to the number of passes of a leaf through the brushes, (2) mite counts as generated by the machine compared with visual counts under a microscope, (3) the lateral distribution of mites on the collection plate and (4) the accuracy and precision of a 10% sub-sample using a double-transect counting grid. We found that about 95% of mites on a leaf were recovered after five passes, and 99% after nine passes, and mite counts from brushing were consistently higher than those from visual inspection. Lateral distribution of mites was not uniform, being highest in concentration at the center and lowest at the periphery. The 10% double-transect pattern did not result in a significant correlation with the total plate count at low mite density, but accuracy and precision improved at medium and high density. We suggest that a more accurate and precise sample may be achieved using a modified pattern which concentrates on the center plus some of the adjacent area. PMID:26459377
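
    The recovery figures are mutually consistent if each pass is assumed to dislodge a roughly constant fraction of the mites still on the leaf; the quick check below is an interpretation of the reported numbers, not a calculation from the paper.

```python
# Quick consistency check, assuming each pass dislodges a constant fraction p of the
# mites remaining on the leaf, so cumulative recovery after n passes is 1 - (1 - p)^n.
p = 1 - 0.05 ** (1 / 5)                 # ~0.45 per pass, from 95% recovery after 5 passes
print(f"per-pass recovery ≈ {p:.2f}")
print(f"predicted recovery after 9 passes ≈ {1 - (1 - p) ** 9:.3f}")  # ~0.995, near the reported 99%
```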

  6. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  7. Estimation of membrane diffusion coefficients and equilibration times for low-density polyethylene passive diffusion samplers.

    PubMed

    Divine, Craig E; McCray, John E

    2004-03-15

    Passive diffusion (PD) samplers offer several potential technical and cost-related advantages, particularly for measuring dissolved gases and volatile organic compounds (VOCs) in groundwater at contaminated sites. Sampler equilibration is a diffusion-type process; therefore, equilibration time is dependent on sampler dimensions, membrane thickness, and the temperature-dependent membrane diffusion coefficient (Dm) for the analyte of interest. Diffusion coefficients for low-density polyethylene membranes were measured for He, Ne, H2, O2, and N2 in laboratory experiments and ranged from 1.1 to 1.9 x 10(-7) cm2 sec(-1) (21 degrees C). Additionally, Dm values for several commonly occurring VOCs were estimated from empirical experimental data previously presented by others (Vroblesky, D. A.; Campbell, T. R. Adv. Environ. Res. 2001, 5(1), 1.), and estimated values ranged from 1.7 to 4.4 x 10(-7) cm2 sec(-1) (21 degrees C). On the basis of these Dm ranges, PD sampler equilibration time is predicted for various sampler dimensions, including dimensions consistent with simple constructed samplers used in this study and commercially available samplers. Additionally, a numerical model is presented that can be used to evaluate PD sampler concentration "lag time" for conditions in which in situ concentrations are temporally variable. The model adequately predicted lag time for laboratory experiments and is used to show that data obtained from appropriately designed PD samplers represent near-instantaneous measurement of in situ concentrations for most field conditions. PMID:15074699
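
    The time-scale reasoning can be sketched with a well-mixed, first-order uptake approximation, dC/dt = k (C_ext - C) with k = Dm·A/(L·V); this is a simplification standing in for the paper's numerical diffusion model, and the sampler dimensions and membrane thickness below are assumed values.

```python
# Minimal sketch: first-order equilibration time scale for a membrane-walled sampler.
import numpy as np

Dm = 1.5e-7          # membrane diffusion coefficient, cm^2/s (mid-range value above)
L = 0.01             # membrane thickness, cm (assumed 100 um)
radius, length = 2.0, 15.0          # sampler dimensions, cm (assumed)
A = 2 * np.pi * radius * length     # membrane exchange area, cm^2
V = np.pi * radius**2 * length      # internal water volume, cm^3

k = Dm * A / (L * V)                # first-order rate constant, 1/s
t90 = np.log(10) / k                # time to reach 90% of the external concentration
print(f"k ≈ {k:.2e} 1/s, 90% equilibration in ≈ {t90 / 86400:.1f} days")
```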

  8. Using kernel density estimation to understand the influence of neighbourhood destinations on BMI

    PubMed Central

    King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M

    2016-01-01

    Objectives: Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs. Study design and setting: A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods: Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1 (least)–Q5 (most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results: For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated effects, although effects remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045. Conclusions: This study, conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity.
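
    The exposure measure can be sketched as a kernel density surface over geocoded destinations evaluated at each participant's home; the bandwidth mirrors the 1200 m kernel, while the projected coordinates, point locations and quintile split below are illustrative assumptions rather than the study data.

```python
# Minimal sketch: Gaussian kernel density of destinations evaluated at home locations,
# then classified into quintiles of destination intensity.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
destinations = rng.uniform(0, 20_000, size=(500, 2))   # hypothetical shop/service locations (m)
homes = rng.uniform(0, 20_000, size=(2349, 2))         # hypothetical participant homes (m)

kde = KernelDensity(kernel="gaussian", bandwidth=1200.0).fit(destinations)
intensity = np.exp(kde.score_samples(homes))            # destination intensity at each home

quintile = np.digitize(intensity, np.quantile(intensity, [0.2, 0.4, 0.6, 0.8])) + 1
print("participants per quintile:", np.bincount(quintile)[1:])
```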

  9. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. 35 sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with about 5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
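
    The sparse-inversion idea can be sketched in one dimension: represent an "emission" field in a wavelet basis, observe it through a linear operator standing in for the transport model, and recover the few significant wavelet weights by matching pursuit. The sketch uses scikit-learn's plain Orthogonal Matching Pursuit rather than StOMP, a random matrix in place of WRF, and synthetic data, so it illustrates the principle only.

```python
# Minimal sketch: sparse recovery of wavelet weights of a 1-D field from few observations.
import numpy as np
import pywt
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n = 128
wav, mode = "haar", "periodization"          # periodization keeps exactly n coefficients

template = pywt.wavedec(np.zeros(n), wav, mode=mode, level=4)
arr0, slices = pywt.coeffs_to_array(template)

def synthesize(w_flat):
    """Inverse wavelet transform of a flat coefficient vector."""
    coeffs = pywt.array_to_coeffs(w_flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wav, mode=mode)

# Sparse truth in the wavelet domain (a few strong "urban" coefficients).
w_true = np.zeros(n)
w_true[rng.choice(n, 6, replace=False)] = rng.normal(5.0, 1.0, 6)

Psi = np.column_stack([synthesize(col) for col in np.eye(n)])  # wavelet synthesis matrix
H = rng.normal(size=(35, n))                                   # 35 "sensors" in place of transport
y = H @ (Psi @ w_true)                                         # synthetic observations

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(H @ Psi, y)
rel_err = np.linalg.norm(Psi @ (omp.coef_ - w_true)) / np.linalg.norm(Psi @ w_true)
print(f"relative reconstruction error ≈ {rel_err:.2%}")
```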

  10. On L p -Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds

    NASA Astrophysics Data System (ADS)

    Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua

    2015-02-01

    in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of , and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.

  11. Applying a random encounter model to estimate lion density from camera traps in Serengeti National Park, Tanzania

    PubMed Central

    Cusack, Jeremy J; Swanson, Alexandra; Coulson, Tim; Packer, Craig; Carbone, Chris; Dickman, Amy J; Kosmala, Margaret; Lintott, Chris; Rowcliffe, J Marcus

    2015-01-01

    The random encounter model (REM) is a novel method for estimating animal density from camera trap data without the need for individual recognition. It has never been used to estimate the density of large carnivore species, despite these being the focus of most camera trap studies worldwide. In this context, we applied the REM to estimate the density of female lions (Panthera leo) from camera traps implemented in Serengeti National Park, Tanzania, comparing estimates to reference values derived from pride census data. More specifically, we attempted to account for bias resulting from non-random camera placement at lion resting sites under isolated trees by comparing estimates derived from night versus day photographs, between dry and wet seasons, and between habitats that differ in their amount of tree cover. Overall, we recorded 169 and 163 independent photographic events of female lions from 7,608 and 12,137 camera trap days carried out in the dry season of 2010 and the wet season of 2011, respectively. Although all REM models considered over-estimated female lion density, models that considered only night-time events resulted in estimates that were much less biased relative to those based on all photographic events. We conclude that restricting REM estimation to periods and habitats in which animal movement is more likely to be random with respect to cameras can help reduce bias in estimates of density for female Serengeti lions. We highlight that accurate REM estimates will nonetheless be dependent on reliable measures of average speed of animal movement and camera detection zone dimensions. © 2015 The Authors. Journal of Wildlife Management published by Wiley Periodicals, Inc. on behalf of The Wildlife Society. PMID:26640297
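
    The REM of Rowcliffe et al. (2008) converts a photographic encounter rate into density without individual recognition via D = (y/t)·π / (v·r·(2 + θ)); a direct transcription is below. The movement and detection-zone values in the example are illustrative placeholders, not the parameters used in the lion study.

```python
# Minimal sketch of the random encounter model density formula.
import math

def rem_density(y, t, v, r, theta):
    """Density (animals per km^2) from y photographs in t camera-days,
    day range v (km/day), detection radius r (km) and detection angle theta (radians)."""
    return (y / t) * math.pi / (v * r * (2 + theta))

# Dry-season photographic events with hypothetical movement/detection parameters.
d = rem_density(y=169, t=7608, v=8.0, r=0.012, theta=math.radians(40))
print(f"density ≈ {d:.2f} per km^2 ({100 * d:.0f} per 100 km^2)")
```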

  12. Fecundity estimation by oocyte packing density formulae in determinate and indeterminate spawners: Theoretical considerations and applications

    NASA Astrophysics Data System (ADS)

    Kurita, Yutaka; Kjesbu, Olav S.

    2009-02-01

    This paper explores why the 'Auto-diametric method', currently used in many laboratories to quickly estimate fish fecundity, works well on marine species with a determinate reproductive style but much less so on species with an indeterminate reproductive style. Algorithms describing links between potentially important explanatory variables to estimate fecundity were first established, and these were followed by practical observations in order to validate the method under two extreme situations: 1) straightforward fecundity estimation in a determinate, single-batch spawner, Atlantic herring (AH) Clupea harengus, and 2) challenging fecundity estimation in an indeterminate, multiple-batch spawner, Japanese flounder (JF) Paralichthys olivaceus. The Auto-diametric method relies on the successful prediction of the number of vitellogenic oocytes (VTO) per gram ovary (oocyte packing density; OPD) from the mean VTO diameter. Theoretically, OPD can be reproduced from the following four variables: OD_V (volume-based mean VTO diameter, which deviates from the arithmetic mean VTO diameter), VF_VTO (volume fraction of VTO in the ovary), ρ_o (specific gravity of the ovary) and k (VTO shape, i.e. ratio of long to short oocyte axes). VF_VTO, ρ_o and k were tested in relation to growth in OD_V. The dynamic range throughout maturation was clearly highest for VF_VTO. As a result, OPD was influenced mainly by OD_V and secondly by VF_VTO. Log(OPD) for AH decreased as log(OD_V) increased, while log(OPD) for JF first increased during early vitellogenesis, then decreased during late vitellogenesis and spawning as log(OD_V) increased. These linear regressions thus behaved statistically differently between species, and the associated residuals fluctuated more for JF than for AH. We conclude that the OPD-OD_V relationship may be better expressed by several curves that cover different parts of the maturation cycle rather than by one curve that covers all these parts. This seems to be particularly
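
    A purely dimensional illustration of why OPD is dominated by oocyte size and the VTO volume fraction is sketched below: oocytes per gram of ovary equal the VTO volume per gram divided by the mean volume of one oocyte. This is a geometric sketch under the assumption of spherical oocytes of volume-based diameter OD_V, not the authors' exact formula; the example numbers are hypothetical.

```python
# Dimensional sketch: OPD ≈ VF_VTO / (rho_o * mean oocyte volume).
import math

def opd(od_v_um, vf_vto, rho_o=1.05):
    """Oocytes per gram ovary from volume-based mean diameter (um), VTO volume
    fraction, and ovary specific gravity (g/cm^3). The shape factor k would enter
    when converting single-axis measurements to OD_V; it is omitted here."""
    od_v_cm = od_v_um * 1e-4
    v_oocyte = (math.pi / 6) * od_v_cm**3      # sphere of volume-based diameter OD_V (cm^3)
    return vf_vto / (rho_o * v_oocyte)

# Example: 600 um oocytes occupying 60% of the ovary volume.
print(f"{opd(600, 0.60):.0f} vitellogenic oocytes per gram ovary")
```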

  13. X-Ray Methods to Estimate Breast Density Content in Breast Tissue

    NASA Astrophysics Data System (ADS)

    Maraghechi, Borna

    This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations were done for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≡ fibrous, lucite ≡ cancer and polyet ≡ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) using simulation and experimental data. The two basis function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types. The radon transforms were estimated via four energy integrating
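
    The two-basis-function idea reduces, in its simplest form, to solving two linear equations: log-attenuation measured in two energy bins equals the attenuation coefficients of the two basis materials times their path lengths. The sketch below uses placeholder attenuation coefficients and a simulated transmission measurement, so it illustrates the decomposition step only, not the study's detector models or spectra.

```python
# Minimal sketch: two-energy, two-material decomposition of a biopsy into fat and
# a fibrous/cancer mixture, returning the fat thickness fraction.
import numpy as np

# Linear attenuation coefficients (1/cm) at the two effective energies (assumed values).
mu = np.array([[0.22, 0.45],    # [mu_fat(E1), mu_mix(E1)]
               [0.19, 0.32]])   # [mu_fat(E2), mu_mix(E2)]

def fat_fraction(trans_e1, trans_e2):
    """Fat thickness fraction from two-bin relative transmissions I/I0."""
    logs = -np.log([trans_e1, trans_e2])          # measured line integrals
    t_fat, t_mix = np.linalg.solve(mu, logs)      # path lengths through each basis material
    return t_fat / (t_fat + t_mix)

# Hypothetical 0.5 cm biopsy that is 60% fat by thickness: forward-simulate, then invert.
t = np.array([0.3, 0.2])
I_rel = np.exp(-mu @ t)
print(f"recovered fat fraction ≈ {fat_fraction(*I_rel):.2f}")
```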

  14. Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts

    NASA Technical Reports Server (NTRS)

    Grau, David

    2012-01-01

    This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. However, the same principle can be used to determine the thickness change of material using a thinner region to determine thickening, or it can be used to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve, or a digital x-ray device with a product resulting in like characteristics, is necessary. If a film exists with linear characteristics, this type of film would be ideal; however, at the time of this reporting, no such film has been known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as the ImageJ freeware, is needed. The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure
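
    The calibration idea can be sketched in a few lines: the measured image densities of the machined steps with known fractional thicknesses define a calibration curve, and the density of the region of interest is interpolated back to a thickness estimate. The density and thickness values below are illustrative, not measurements from an actual radiograph.

```python
# Minimal sketch: thickness-from-density interpolation against stepped calibration tooling.
import numpy as np

# Known step thicknesses (fraction of reference) and their measured image densities.
step_thickness = np.array([0.70, 0.80, 0.90, 1.00])
step_density   = np.array([2.45, 2.20, 1.98, 1.80])   # thinner material exposes the film more

def thickness_from_density(roi_density):
    # np.interp needs an increasing x-array, so interpolate on densities sorted ascending.
    order = np.argsort(step_density)
    return np.interp(roi_density, step_density[order], step_thickness[order])

roi_density = 2.10                     # measured in the suspect region (e.g., with ImageJ)
est = thickness_from_density(roi_density)
print(f"estimated local thickness ≈ {est:.2f} x reference ({(1 - est) * 100:.0f}% thinning)")
```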

  15. Experimental and theoretical analysis of wavelet-based denoising filter for echocardiographic images.

    PubMed

    Kang, S C; Hong, S H

    2001-01-01

    One of the most significant challenges in diagnostic echocardiographic imaging is to reduce speckle noise and improve image quality. In this paper we propose a simple and effective filter design for image denoising and contrast enhancement based on a multiscale wavelet denoising method. Wavelet threshold algorithms replace wavelet coefficients with small magnitude by zero and keep or shrink the other coefficients. This is basically a local procedure, since wavelet coefficients characterize the local regularity of a function. We first estimate the distribution of noise within the echocardiographic image and then apply a suitable wavelet threshold algorithm. A common way of estimating the speckle noise level in coherent imaging is to calculate the mean-to-standard-deviation ratio of the pixel intensity, often termed the Equivalent Number of Looks (ENL), over a uniform image area. Unfortunately, we found this measure not very robust, mainly because of the difficulty of identifying a uniform area in a real image. For this reason, we only use the S/MSE ratio, which corresponds to the standard SNR in the case of additive noise. We have simulated some echocardiographic images using specialized hardware for real-time application; processing of a 512*512 image takes about 1 min. Our experiments show that the optimal threshold level depends on the spectral content of the image. High spectral content tends to inflate the noise standard deviation estimated at the finest level of the DWT. As a result, a lower threshold parameter is required to reach the optimal S/MSE. The standard WCS theory predicts a threshold that depends on the number of signal samples only. PMID:11604864

  16. Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites

    PubMed Central

    Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L

    2014-01-01

    Aim: The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location: Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods: Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results: The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by > 25%, whereas regional uncertainties for the maps were reported to be < 5%. Main conclusions: Pantropical biomass maps are widely used by governments and by projects aiming to reduce deforestation using carbon offsets, but may have significant regional biases. Carbon-mapping techniques must be revised to account for the known ecological variation in tree wood density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities.

  17. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
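
    The non-parametric SSD idea can be sketched by fitting a Gaussian kernel density to log-transformed species toxicity values and reading the HC5 off the resulting cumulative distribution. The toxicity values below are hypothetical, and scipy's rule-of-thumb bandwidth is used in place of the optimized bandwidths discussed in the paper.

```python
# Minimal sketch: kernel-density SSD and numerical HC5 (5th percentile) extraction.
import numpy as np
from scipy.stats import gaussian_kde

acute_values_ug_L = np.array([42, 85, 130, 220, 410, 560, 900, 1500, 2300, 5100])
log_x = np.log10(acute_values_ug_L)

kde = gaussian_kde(log_x)
grid = np.linspace(log_x.min() - 1, log_x.max() + 1, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                                      # numerical CDF of the fitted SSD

hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]        # concentration protecting 95% of species
print(f"HC5 ≈ {hc5:.1f} ug/L")
```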

  18. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time-invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare the performance of different wavelets, such as Daubechies, biorthogonal, and reverse biorthogonal wavelets of different degrees and orders, for pre-processing of the images.

  19. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    PubMed Central

    Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan

    2015-01-01

    Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines. PMID:26692046

  20. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising.

    PubMed

    Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

    2013-08-01

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods, viz. Mann-Kendall and Spearman's rho, along with linear regression, when applied for detecting a linear trend in the time series data, do not yield conclusive results for trend detection with a confidence of 95% for most of the samples. The stationary wavelet based hard thresholding data pruning method, with Haar as the analyzing wavelet, was applied to remove the noise present in the same data. Results indicate that the confidence of the established trend improves significantly after pre-processing, to more than 98%, compared with the conventional non-parametric methods applied to the direct measurements. PMID:23524202
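
    A minimal sketch of the pre-processing chain is given below: a stationary (undecimated) Haar wavelet transform, hard thresholding of the detail coefficients, reconstruction, and a Mann-Kendall-type rank-correlation check on the denoised series. The universal-threshold rule and the synthetic 32-point series are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch: SWT hard-threshold denoising followed by a rank-correlation trend test.
import numpy as np
import pywt
from scipy.stats import kendalltau

rng = np.random.default_rng(4)
t = np.arange(32)                                   # 8 years x 4 samples/year
activity = 5.0 + 0.05 * t + rng.normal(0, 0.8, 32)  # weak trend buried in noise

level = 2                                           # series length must be divisible by 2**level
coeffs = pywt.swt(activity, "haar", level=level)    # list of (cA, cD) pairs, coarse to fine
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise scale from the finest details
thr = sigma * np.sqrt(2 * np.log(activity.size))    # universal threshold
denoised_coeffs = [(cA, pywt.threshold(cD, thr, mode="hard")) for cA, cD in coeffs]
denoised = pywt.iswt(denoised_coeffs, "haar")

tau, p = kendalltau(t, denoised)
print(f"Kendall tau = {tau:.2f}, p = {p:.3f} after denoising")
```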

  1. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
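
    One common form of wavelet-based fusion followed by unsupervised clustering is sketched below: keep the low-frequency (approximation) content of the optical image, take the larger-magnitude detail coefficients from either sensor, reconstruct, and cluster the fused pixels. The max-absolute fusion rule, the db2 wavelet, the k=5 classes and the random stand-in images are assumptions for illustration, not the authors' exact scheme.

```python
# Minimal sketch: wavelet-domain fusion of two co-registered images, then k-means clustering.
import numpy as np
import pywt
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
landsat = rng.random((256, 256))        # stand-ins for co-registered image bands
sar = rng.random((256, 256))

c_opt = pywt.wavedec2(landsat, "db2", level=3)
c_sar = pywt.wavedec2(sar, "db2", level=3)

fused = [c_opt[0]]                                            # keep the optical approximation
for d_opt, d_sar in zip(c_opt[1:], c_sar[1:]):                # (cH, cV, cD) per level
    fused.append(tuple(np.where(np.abs(o) >= np.abs(s), o, s)
                       for o, s in zip(d_opt, d_sar)))        # max-absolute detail rule
fused_img = pywt.waverec2(fused, "db2")

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(fused_img.reshape(-1, 1))
print("pixels per cluster:", np.bincount(labels))
```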

  2. Effect of sampling density and design on estimation of streambed attributes

    NASA Astrophysics Data System (ADS)

    Kennedy, Casey D.; Genereux, David P.; Mitasova, Helena; Corbett, D. Reide; Leahy, Scott

    2008-06-01

    effect of "diminishing returns" was evident for sampling densities greater than ∼24 points per reach (roughly 0.05-0.06 points per m2 of streambed). Relative to sampling density, sampling design had little effect on values of p. Average error in streambed attributes was generally small (⩽10%) and less than the 95% confidence limits about the reach-average values of the attributes. The ability to estimate unknown point values by interpolation among other point values was poor using both 12- and 36-point subsets, though results suggest the 24 additional known point values (in going from 12 to 36) were helpful in one case in which there was some degree of autocorrelation between the additional known values and the values to be predicted in the interpolation. Visual inspection of 130 contour maps showed that those based on 36-point values were far more realistic in appearance than those based on 12-point values (where the standard for "realistic" appearance was the contour maps based on the full datasets of 54-point values).

  3. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  4. Estimated uncertainty of calculated liquefied natural gas density from a comparison of NBS and Gaz de France densimeter test facilities

    SciTech Connect

    Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.

    1982-12-16

    Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.

  5. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    PubMed

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. PMID:21798775

  6. Novelty detection by multivariate kernel density estimation and growing neural gas algorithm

    NASA Astrophysics Data System (ADS)

    Fink, Olga; Zio, Enrico; Weidmann, Ulrich

    2015-01-01

    One of the underlying assumptions when using data-based methods for pattern recognition in diagnostics or prognostics is that the selected data sample used to train and test the algorithm is representative of the entire dataset and covers all combinations of parameters and conditions, and resulting system states. However in practice, operating and environmental conditions may change, unexpected and previously unanticipated events may occur and corresponding new anomalous patterns develop. Therefore for practical applications, techniques are required to detect novelties in patterns and give confidence to the user on the validity of the performed diagnosis and predictions. In this paper, the application of two types of novelty detection approaches is compared: a statistical approach based on multivariate kernel density estimation and an approach based on a type of unsupervised artificial neural network, called the growing neural gas (GNG). The comparison is performed on a case study in the field of railway turnout systems. Both approaches demonstrate their suitability for detecting novel patterns. Furthermore, GNG proves to be more flexible, especially with respect to dimensionality of the input data and suitability for online learning.
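
    The statistical branch of the comparison can be sketched as follows: fit a multivariate kernel density to "healthy" training vectors and flag test vectors whose density falls below a low quantile of the training densities. The 1% threshold and the two-dimensional synthetic data are illustrative assumptions, not the railway turnout case-study settings.

```python
# Minimal sketch: multivariate KDE novelty detection with a density-quantile threshold.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
train = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=500)

kde = gaussian_kde(train.T)                       # scipy expects shape (n_dims, n_samples)
threshold = np.quantile(kde(train.T), 0.01)       # 1% of training points fall below this density

test = np.array([[0.2, -0.1],                     # nominal-looking operating point
                 [4.0, 5.0]])                     # previously unseen condition
is_novel = kde(test.T) < threshold
print(dict(zip(["nominal point", "far point"], is_novel)))
```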

  7. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the finding that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.
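
    One simple way an areal fault density turns into an encounter probability is through a Poisson-process argument: if fault centres occur with density λ per km² and the plume sweeps a footprint of area A, the chance of touching at least one fault is 1 - exp(-λA). This is a generic sketch with illustrative numbers, not the study's actual statistical model or inputs.

```python
# Minimal sketch: Poisson-process fault encounter probability for a plume footprint.
import math

lam = 0.006        # faults per km^2 capable of fully offsetting the seal (assumed)
area = 5.0         # projected plume footprint, km^2 (assumed)
p_encounter = 1 - math.exp(-lam * area)
print(f"encounter probability ≈ {p_encounter:.1%}")
```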

  8. Exploration of diffusion kernel density estimation in agricultural drought risk analysis: a case study in Shandong, China

    NASA Astrophysics Data System (ADS)

    Chen, W.; Shao, Z.; Tiong, L. K.

    2015-11-01

    Drought has caused the most widespread damage in China, making up over 50% of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data of 19 weather stations in Shandong province, China. A kernel density based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate, without boundary leakage. Combined with the GIS technique, the drought risk is mapped, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.

  9. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits this fact. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
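
    The statistical core can be sketched as a two-component mixture on the detail coefficients, a zero-mean Gaussian for noise and a Laplacian for signal, fitted by EM, with each coefficient shrunk by its posterior probability of being signal. The wavelet transform and reconstruction (e.g. with PyWavelets) and the Rician/Rayleigh specifics of MRI are omitted, and the initialization and iteration count below are assumptions, so this is a sketch of the EM shrinkage idea, not the published filter.

```python
# Minimal sketch: EM fit of a Gaussian(noise)/Laplacian(signal) mixture on wavelet
# detail coefficients, followed by posterior-probability shrinkage.
import numpy as np

def em_shrink(d, n_iter=50):
    """d: 1-D array of wavelet detail coefficients. Returns (shrunk, pi, sigma, b)."""
    pi, sigma, b = 0.5, d.std() / 2, d.std()          # initial guesses
    for _ in range(n_iter):
        gauss = np.exp(-0.5 * (d / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        lapl = np.exp(-np.abs(d) / b) / (2 * b)
        r = pi * lapl / (pi * lapl + (1 - pi) * gauss + 1e-300)   # P(signal | coefficient)
        pi = r.mean()
        b = np.sum(r * np.abs(d)) / (r.sum() + 1e-300)
        sigma = np.sqrt(np.sum((1 - r) * d**2) / ((1 - r).sum() + 1e-300))
    return r * d, pi, sigma, b

# Synthetic detail coefficients: 10% Laplacian "signal" plus Gaussian noise everywhere.
rng = np.random.default_rng(7)
details = np.where(rng.random(2000) < 0.1, rng.laplace(0, 2.0, 2000), 0) \
          + rng.normal(0, 0.3, 2000)
shrunk, pi, sigma, b = em_shrink(details)
print(f"EM estimates: signal fraction ≈ {pi:.2f}, noise sigma ≈ {sigma:.2f}")
```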

  10. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits this fact. The method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  11. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest

    PubMed Central

    Hearn, Andrew J.; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T. B.; Macdonald, David W.

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  12. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest.

    PubMed

    Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches. PMID:27007219

  13. A Wavelet-based Seismogram Inversion Algorithm for the In Situ Characterization of Nonlinear Soil Behavior

    NASA Astrophysics Data System (ADS)

    Assimaki, D.; Li, W.; Kalos, A.

    2011-10-01

    We present a full waveform inversion algorithm for downhole array seismogram recordings that can be used to estimate the inelastic soil behavior in situ during earthquake ground motion. For this purpose, we first develop a new hysteretic scheme that improves upon existing nonlinear site response models by allowing adjustment of the width and length of the hysteresis loop with a relatively small number of soil parameters. The constitutive law is formulated to approximate the response of saturated cohesive materials, and does not account for volumetric changes due to shear leading to pore pressure development and potential liquefaction. We implement the soil model in the forward operator of the inversion, and evaluate the constitutive parameters that maximize the cross-correlation between site response predictions and observations on the ground surface. The objective function is defined in the wavelet domain, which allows equal weight to be assigned across all frequency bands of the non-stationary signal. We evaluate the convergence rate and robustness of the proposed scheme for noise-free and noise-contaminated data, and illustrate good performance of the inversion for signal-to-noise ratios as low as 3. We finally apply the proposed scheme to downhole array data, and show that results compare very well with published data on generic soil conditions and previous geotechnical investigation studies at the array site. By assuming a realistic hysteretic model and estimating the constitutive soil parameters, the proposed inversion accounts for the instantaneous adjustment of soil response to the level of strain and the load path during transient loading, and allows results to be used in predictions of nonlinear site effects during future events.

  14. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  15. Estimating Synaphobranchus kaupii densities: Contribution of fish behaviour to differences between bait experiments and visual strip transects

    NASA Astrophysics Data System (ADS)

    Trenkel, Verena M.; Lorance, Pascal

    2011-01-01

    Kaup's arrowtooth eel Synaphobranchus kaupii is a small-bodied fish and an abundant inhabitant of the European continental slope. To estimate its local density, video observations were collected using the remotely operated vehicle (ROV) Victor 6000 at three locations on the Bay of Biscay slope. Two methods for estimating local densities were tested: strip transect counts and bait experiments. For the bait experiments, three behaviour types were observed in about equal proportions for individuals arriving near the seafloor: moving up the current towards the ROV, moving across the current and drifting with the current. Visible attraction towards the bait was highest for individuals swimming against the current (80%) and about equally low for the other two types (around 30%); it did not depend on current speed or temperature. Three main innovations were introduced for estimating population densities from bait experiments: (i) inclusion of an additional behaviour category, that of passively drifting individuals, (ii) inclusion of reaction behaviour for actively swimming individuals and (iii) a hierarchical Bayesian estimation framework. The results indicated that about half of the individuals were foraging actively, of which less than one third reacted on encountering the bait plume, while the other half were drifting with the current. Taking account of drifting individuals and the reaction probability made density estimates from bait experiments and strip transects more similar.

  16. Estimation of tiger densities in the tropical dry forests of Panna, Central India, using photographic capture-recapture sampling

    USGS Publications Warehouse

    Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.

    2004-01-01

    Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.

  17. Estimation of ocelot density in the pantanal using capture-recapture analysis of camera-trapping data

    USGS Publications Warehouse

    Trolle, M.; Kery, M.

    2003-01-01

    Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).
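
    The final step of such camera-trap studies, converting an abundance estimate and an effective trapped area into a density, is simple enough to sketch. In the snippet below the abundance and area are taken from the abstract, while the standard error on the abundance is hypothetical; for illustration only the abundance is treated as uncertain, whereas a full analysis would typically also propagate uncertainty in the effective area.

        # Sketch: converting a closed-population abundance estimate and an effective
        # trapped area into a density, as in camera-trap capture-recapture studies.
        # Only the abundance estimate is treated as uncertain here, for illustration.
        def density_per_block(n_hat, se_n, area_km2, block_km2=5.0):
            d = n_hat / area_km2                  # individuals per km^2
            se_d = se_n / area_km2                # delta-method SE with the area held fixed
            return d * block_km2, se_d * block_km2

        # Values from the ocelot study: 10 individuals over 17.71 km^2 -> ~2.82 per 5 km^2.
        # The SE of 2.0 on the abundance estimate is hypothetical, to show the calculation.
        print(density_per_block(10, 2.0, 17.71))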

  18. A wavelet-based method to exploit epigenomic language in the regulatory region

    PubMed Central

    Nguyen, Nha; Vo, An; Won, Kyoung-Jae

    2014-01-01

    Motivation: Epigenetic landscapes in the regulatory regions reflect binding condition of transcription factors and their co-factors. Identifying epigenetic condition and its variation is important in understanding condition-specific gene regulation. Computational approaches to explore complex multi-dimensional landscapes are needed. Results: To study epigenomic condition for gene regulation, we developed a method, AWNFR, to classify epigenomic landscapes. Assuming a mixture of Gaussians for a nucleosome, the proposed method captures the shape of histone modification and identifies potential regulatory regions in the wavelet domain. For accurate estimation as well as enhanced computational speed, we developed a novel algorithm based on a down-sampling operation and wavelet footprints. We showed the algorithmic advantages of AWNFR using simulated data. AWNFR identified regulatory regions more effectively and accurately than previous approaches with the epigenome data in mouse embryonic stem cells and human lung fibroblast cells (IMR90). Based on the detected epigenomic landscapes, AWNFR classified epigenomic status and studied epigenomic codes. We studied co-occurring histone marks and showed that AWNFR captures the epigenomic variation across time. Availability and implementation: The source code and supplemental document of AWNFR are available at http://wonk.med.upenn.edu/AWNFR. Contact: wonk@mail.med.upenn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24096080

  19. Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Bozdag, E.

    2014-12-01

    We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor to mitigate cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map the heterogeneities with great complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. An elastic shallow model extending to 100 m depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
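
    The coarse-to-fine strategy sketched above can be illustrated by reconstructing a residual from progressively more wavelet detail levels. The snippet assumes the PyWavelets (pywt) package, and the wavelet family, depth and random test residual are illustrative choices rather than those selected in the study.

        # Sketch: building coarse-to-fine versions of a waveform residual by keeping only
        # the lowest-frequency wavelet levels and reconstructing, the basic multi-scale
        # device used to mitigate cycle skipping. Wavelet family and depth are
        # illustrative choices, not the ones selected in the study.
        import numpy as np
        import pywt

        def band_limited_residual(residual, keep_detail_levels, wavelet="db6", level=6):
            coeffs = pywt.wavedec(residual, wavelet, level=level)
            filtered = [coeffs[0]]                         # always keep the approximation
            for i, d in enumerate(coeffs[1:], start=1):    # i = 1 is the coarsest detail
                filtered.append(d if i <= keep_detail_levels else np.zeros_like(d))
            return pywt.waverec(filtered, wavelet)

        rng = np.random.default_rng(1)
        residual = rng.normal(size=4096)
        for k in range(7):                                 # 0 = coarsest content only
            print(k, float(np.linalg.norm(band_limited_residual(residual, k))))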

  20. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    PubMed Central

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template-based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and are enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD <2.0 Å), the DPM-HMM method performs as well as or better than the best templates, demonstrating that our automated method recaptures these canonical loops without inclusion of any IgG-specific terms or manual intervention. In cases with poor or few good templates (mean RMSD >7.0 Å), this sampling method produces a population of loop structures to around 3.66 Å for loops up to 17 residues. In a direct comparison of sampling with the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method in the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near native loop structure. The software used in this analysis is available for download at http

  1. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    NASA Astrophysics Data System (ADS)

    Ge, Shaokui; Smith, Richard G.; Jacovides, Constantinos P.; Kramer, Marc G.; Carruthers, Raymond I.

    2011-08-01

    Plants require solar radiation for photosynthesis and their growth is directly related to the amount received, assuming that other environmental parameters are not limiting. Therefore, precise estimation of photosynthetically active radiation (PAR) is necessary to enhance the overall accuracy of plant growth models. This study aimed to explore the PAR radiant flux in the San Francisco Bay Area of northern California. During the growing seasons (March through August) of 2007 and 2008, the on-site magnitudes of photosynthetic photon flux densities (PPFD) were investigated and then processed at both the hourly and daily time scales. Combined with global solar radiation (RS) and simulated extraterrestrial solar radiation, five PAR-related quantities were derived: flux density-based PAR (PPFD), energy-based PAR (PARE), the from-flux-to-energy conversion efficiency (fFEC), the fraction of PAR energy in the global solar radiation (fE), and a newly developed indicator, the lost PARE percentage (LPR), describing the PAR energy lost as solar radiation penetrates from the extraterrestrial system to the ground. These PAR-related quantities showed significant diurnal variation, with high values occurring at midday and low values in the morning and afternoon hours. During the entire experimental season, the overall mean hourly value of fFEC was found to be 2.17 μmol J-1, while the respective fE value was 0.49. The monthly averages of hourly fFEC and fE at solar noon ranged from 2.15 in March to 2.39 μmol J-1 in August and from 0.47 in March to 0.52 in July, respectively. However, the monthly average daily values were relatively constant and exhibited only weak seasonal variation, ranging from 2.02 mol MJ-1 and 0.45 (March) to 2.19 mol MJ-1 and 0.48 (June). The mean daily values of fFEC and fE at solar noon were 2.16 mol MJ-1 and 0.47 across the entire growing season, respectively. Both PPFD and the first-ever reported LPR showed strong diurnal patterns. However, they had

  2. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these
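
    A rough flavour of such a wavelet sensor can be given by estimating a local regularity exponent from the growth of undecimated wavelet detail coefficients across dyadic scales; small exponents mark shock-like regions where the extra dissipation would be switched on. The sketch below assumes the PyWavelets (pywt) package, and the specific wavelet, depth and regression are illustrative rather than the Lipschitz-exponent estimator of the cited scheme.

        # Sketch: a crude wavelet smoothness sensor that estimates a local regularity
        # exponent from how stationary-wavelet detail coefficients grow or decay across
        # dyadic scales; small exponents flag shock-like regions. Uses PyWavelets'
        # undecimated transform; the regression is illustrative, not the exact estimator
        # of the cited scheme.
        import numpy as np
        import pywt

        def regularity_exponent(u, wavelet="db2", level=4):
            # swt returns [(cA_level, cD_level), ..., (cA_1, cD_1)], coarsest level first
            details = np.array([np.abs(d) + 1e-30 for (_, d) in pywt.swt(u, wavelet, level=level)])
            scales = np.arange(level, 0, -1)               # scale index of each row
            logs = np.log2(details)
            j = scales - scales.mean()
            # least-squares slope of log2|W_j u| against scale index, per grid point
            return (j[:, None] * (logs - logs.mean(axis=0))).sum(axis=0) / (j ** 2).sum()

        x = np.linspace(0.0, 1.0, 1024)
        u = np.where(x < 0.5, 1.0, 0.0) + 0.01 * np.sin(8 * np.pi * x)   # jump at x = 0.5
        alpha = regularity_exponent(u)
        print("exponent near the jump:", float(alpha[500:525].min()),
              "away from it:", float(alpha[100:125].mean()))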

  3. Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests

    USGS Publications Warehouse

    Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.

    2005-01-01

    The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important

  4. Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves

    NASA Astrophysics Data System (ADS)

    Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

    2012-05-01

    Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high frequency components of light curves sometimes get included in detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
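
    The separation of scales described above amounts to splitting a light curve into band-limited components that sum back to the original, so that each band can be detrended with its own basis vectors. The sketch below assumes the PyWavelets (pywt) package; the wavelet family, decomposition depth and toy light curve are illustrative assumptions, not the pipeline's actual choices.

        # Sketch: splitting a light curve into wavelet "bands" that sum back to the
        # original signal, so long-term trends and small-scale features can be handled
        # by separate basis vectors. Wavelet family, depth and data are illustrative.
        import numpy as np
        import pywt

        def band_split(flux, wavelet="sym8", level=5):
            coeffs = pywt.wavedec(flux, wavelet, level=level)
            bands = []
            for i in range(len(coeffs)):
                only_i = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
                bands.append(pywt.waverec(only_i, wavelet)[: len(flux)])
            return bands                # bands[0] = slow trend, bands[-1] = fastest band

        t = np.arange(4096)
        rng = np.random.default_rng(2)
        flux = 1.0 + 1e-3 * np.sin(2 * np.pi * t / 900) + 2e-4 * rng.normal(size=t.size)
        bands = band_split(flux)
        print(np.allclose(sum(bands), flux))     # the bands reconstruct the input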

  5. MEASUREMENT OF OAK TREE DENSITY WITH LANDSAT TM DATA FOR ESTIMATING BIOGENIC ISOPRENE EMISSIONS IN TENNESSEE, USA: JOURNAL ARTICLE

    EPA Science Inventory

    JOURNAL NRMRL-RTP-P- 437 Baugh, W., Klinger, L., Guenther, A., and Geron*, C.D. Measurement of Oak Tree Density with Landsat TM Data for Estimating Biogenic Isoprene Emissions in Tennessee, USA. International Journal of Remote Sensing (Taylor and Francis) 22 (14):2793-2810 (2001)...

  6. Primates in Human-Modified and Fragmented Landscapes: The Conservation Relevance of Modelling Habitat and Disturbance Factors in Density Estimation.

    PubMed

    Cavada, Nathalie; Barelli, Claudia; Ciolli, Marco; Rovero, Francesco

    2016-01-01

    Accurate density estimation of threatened animal populations is essential for management and conservation. This is particularly critical for species living in patchy and altered landscapes, as is the case for most tropical forest primates. In this study, we used a hierarchical modelling approach that incorporates the effect of environmental covariates on both the detection (i.e. observation) and the state (i.e. abundance) processes of distance sampling. We applied this method to already published data on three arboreal primates of the Udzungwa Mountains of Tanzania, including the endangered and endemic Udzungwa red colobus (Procolobus gordonorum). The area is a primate hotspot at the continental level. Compared to previous, 'canonical' density estimates, we found that the inclusion of covariates in the modelling makes the inference process more informative, as it takes fully into account the contrasting habitat and protection levels among forest blocks. The correction of density estimates for imperfect detection was especially critical where animal detectability was low. Relative to our approach, density was underestimated by canonical distance sampling, particularly in the less protected forest. Group size had an effect on detectability, determining how the observation process varies depending on the socio-ecology of the target species. Lastly, as the inference on density is spatially explicit to the scale of the covariates used in the modelling, we could confirm that primate densities are highest at low-to-mid elevations, where human disturbance tends to be greater, indicating considerable resilience of the target monkeys in disturbed habitats. However, the marked trend of lower densities in unprotected forests urgently calls for effective forest protection. PMID:26844891

  8. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
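
    The kernel estimate at the heart of this construction can be sketched as a Nadaraya-Watson regression of the non-missingness indicator on the observed time, which is then inverted to form the weights. The snippet below uses a Gaussian kernel with a fixed bandwidth on simulated data; these choices are illustrative and the estimator is a simplified stand-in for the one analysed in the article.

        # Sketch: a Nadaraya-Watson kernel estimate of the conditional probability that
        # the censoring indicator is not missing, given the observed time -- the
        # ingredient behind inverse-probability-of-non-missingness weights.
        # The Gaussian kernel, bandwidth and simulated data are illustrative.
        import numpy as np

        def kernel_nonmissing_prob(t_grid, t_obs, observed_flag, bandwidth=0.5):
            u = (t_grid[:, None] - t_obs[None, :]) / bandwidth
            w = np.exp(-0.5 * u ** 2)                       # Gaussian kernel weights
            return (w * observed_flag[None, :]).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(3)
        t_obs = rng.exponential(2.0, size=500)
        observed = (rng.uniform(size=500) < 1.0 / (1.0 + 0.2 * t_obs)).astype(float)
        grid = np.linspace(0.1, 6.0, 5)
        print(kernel_nonmissing_prob(grid, t_obs, observed))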

  9. A field comparison of nested grid and trapping web density estimators

    USGS Publications Warehouse

    Jett, D.A.; Nichols, J.D.

    1987-01-01

    The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.

  10. Power spectral density estimation by spline smoothing in the frequency domain

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Thompson, J. R.

    1972-01-01

    An approach, based on a global averaging procedure, is presented for estimating the power spectrum of a second order stationary zero-mean ergodic stochastic process from a finite length record. This estimate is derived by smoothing, with a cubic smoothing spline, the naive estimate of the spectrum obtained by applying FFT techniques to the raw data. By means of digital computer simulated results, a comparison is made between the features of the present approach and those of more classical techniques of spectral estimation.
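
    The two-step procedure described here, a naive FFT periodogram followed by cubic-spline smoothing in the frequency domain, can be sketched directly. The snippet below uses an AR(1) test series and an arbitrary smoothing factor for SciPy's UnivariateSpline; both are illustrative assumptions rather than the original implementation.

        # Sketch of the two-step idea: form the naive FFT periodogram of a finite
        # record, then smooth it in the frequency domain with a cubic smoothing spline.
        # The AR(1) test signal and the smoothing factor are illustrative assumptions.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(4)
        n = 2048
        x = np.zeros(n)
        for i in range(1, n):                    # zero-mean AR(1) process as test data
            x[i] = 0.8 * x[i - 1] + rng.normal()

        freqs = np.fft.rfftfreq(n, d=1.0)
        periodogram = np.abs(np.fft.rfft(x)) ** 2 / n        # naive spectral estimate

        # cubic smoothing spline (k=3) fitted to the log-periodogram
        spline = UnivariateSpline(freqs[1:], np.log(periodogram[1:]), k=3, s=5.0 * freqs.size)
        smoothed_psd = np.exp(spline(freqs[1:]))
        print(smoothed_psd[:5])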

  12. Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range

    NASA Technical Reports Server (NTRS)

    Krupenio, N. N.

    1977-01-01

    The density of the Martian soil is evaluated at a depth up to one meter using the results of radar measurement at λ0 = 3.8 cm and polarized radio astronomical measurement at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density according to all measurements is ρ = 1.37 ± 0.33 g/cm3. A map of the distribution of the permittivity and soil density is derived, which was drawn up according to radiophysical data in the 3 centimeter range.

  13. Density estimates of rural dog populations and an assessment of marking methods during a rabies vaccination campaign in the Philippines.

    PubMed

    Childs, J E; Robinson, L E; Sadek, R; Madden, A; Miranda, M E; Miranda, N L

    1998-01-01

    We estimated the population density of dogs by distance sampling and assessed the potential utility of two marking methods for capture-mark-recapture applications following a mass canine rabies-vaccination campaign in Sorsogon Province, the Republic of the Philippines. Thirty villages selected to assess vaccine coverage and for dog surveys were visited 1 to 11 days after the vaccination team's visit. Measurements of the distance of dogs or groups of dogs from transect lines were obtained in 1088 instances (N = 1278 dogs; mean group size = 1.2). Various functions modelling the probability of detection were fitted to a truncated distribution of distances of dogs from transect lines. A hazard rate model provided the best fit and an overall estimate of dog-population density of 468/km2 (95% confidence interval, 359 to 611). At vaccination, most dogs were marked with either a paint stick or a black plastic collar. Overall, 34.8% of 2167 and 28.5% of 2115 dogs could be accurately identified as wearing a collar or showing a paint mark; 49.1% of the dogs had either mark. An increasing time interval between the vaccination-team visit and the dog survey and an increasing distance from the transect line were inversely associated with the probability of observing a paint mark. The probability of observing a collar was positively associated with increasing estimated density of the dog population in a given village and with animals not associated with a house. The data indicate that distance sampling is a relatively simple and adaptable method for estimating dog-population density and is not prone to problems associated with meeting some model assumptions inherent to mark-recapture estimators. PMID:9500175
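
    The distance-sampling calculation behind such an estimate can be sketched as follows: fit a hazard-rate detection function to the perpendicular distances by maximum likelihood, integrate it to obtain the effective strip half-width, and convert the number of detections into a density. The data, truncation distance, transect length and starting values in the snippet are hypothetical, and the exact parameterization follows standard distance-sampling practice rather than the paper's fitted model.

        # Sketch: fit a hazard-rate detection function g(x) = 1 - exp(-(x/sigma)^-b) to
        # perpendicular distances and convert the fit into a line-transect density,
        # D = n * E[group size] / (2 * L * mu), where mu is the effective strip half-width.
        # All data, the truncation distance and starting values are hypothetical.
        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize

        def g(x, sigma, b):
            r = np.maximum(x / sigma, 1e-12)
            return 1.0 - np.exp(-r ** (-b))

        def neg_log_lik(params, distances, w):
            sigma, b = np.exp(params)                        # keep parameters positive
            mu = quad(lambda x: g(x, sigma, b), 0.0, w)[0]   # effective strip half-width
            return -np.sum(np.log(g(distances, sigma, b) / mu))

        rng = np.random.default_rng(5)
        w = 10.0                                             # truncation distance, metres
        cand = rng.uniform(0.0, w, size=5000)                # rejection-sample detections
        distances = cand[rng.uniform(size=cand.size) < g(cand, 3.0, 2.5)][:400]

        fit = minimize(neg_log_lik, x0=np.log([2.0, 2.0]), args=(distances, w), method="Nelder-Mead")
        sigma_hat, b_hat = np.exp(fit.x)
        mu_hat = quad(lambda x: g(x, sigma_hat, b_hat), 0.0, w)[0]
        total_L, mean_group = 100000.0, 1.2                  # metres of transect, dogs per group
        density_km2 = distances.size * mean_group / (2.0 * total_L * mu_hat) * 1e6
        print(round(sigma_hat, 2), round(b_hat, 2), round(density_km2, 1))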

  14. Estimates of volumetric bone density from projectional measurements improve the discriminatory capability of dual X-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.

    1995-01-01

    To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 ± 6.5 years) and 168 women did not have any fractures (age 62.3 ± 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA-BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC. We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are

  15. Estimating the density of honeybee colonies across their natural range to fill the gap in pollinator decline censuses.

    PubMed

    Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A

    2010-04-01

    Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation. PMID:19775273

  16. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  17. Estimation of Vegetation Aerodynamic Roughness of Natural Regions Using Frontal Area Density Determined from Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Crago, Richard

    1994-01-01

    Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.

  18. An assessment study of the wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghua; Zhu, Lie; Sojka, Jan; Kokoszka, Piotr; Jach, Agnieszka

    2008-08-01

    A wavelet-based index of storm activity (WISA) has been recently developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635] to complement the traditional Dst index. The new index can be computed automatically by using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible with respect to data stretch and has a higher temporal resolution (1 min), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment study of the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences in their spectral features. Then we quantitatively assess the flexibility of the WISA with respect to data stretch and study the effects of a varying number of stations on the index. In addition, the ability of the WISA to handle missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities as well as the Dst index, but its full automation, high flexibility with respect to data stretch, ease of using data from a varying number of stations, high temporal resolution, and high tolerance to missing data from individual stations can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index.

  19. An assessment study of the wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Zhu, L.; Sojka, J. J.; Kokoszka, P.; Jach, A.

    2006-12-01

    A wavelet-based index of storm activities (WISA) has been recently developed (Jach et al., 2006) to complement the traditional Dst index. The new index can be computed automatically using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible with respect to data stretch and has a higher temporal resolution (one minute), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment study of the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences in their spectral features. Then we quantitatively assess the flexibility of the WISA with respect to data stretch and study the effects of a varying number of stations on the index. In addition, how well the WISA handles missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities as well as the Dst index, but its full automation, high flexibility with respect to data stretch, ease of using data from a varying number of stations, high temporal resolution, and high tolerance to missing data from individual stations can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index. Jach, A., P. Kokoszka, J. Sojka, and L. Zhu, Wavelet-based index of magnetic storm activity, J. Geophys. Res., in press, 2006.

  20. Correlation for the estimation of the density of fatty acid esters fuels and its implications. A proposed Biodiesel Cetane Index.

    PubMed

    Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio

    2010-09-01

    Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties are different from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, because governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones. In consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, because an accurate knowledge of biodiesel density permits the estimation of other properties such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict the biodiesel density based on its chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853

  1. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for faults in a class of discrete nonlinear systems, based on output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  2. Inverse estimation of parameters for multidomain flow models in soil columns with different macropore densities

    PubMed Central

    Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.

    2013-01-01

    Soil and crop management practices have been found to modify soil structure and alter macropore densities. An ability to accurately determine soil hydraulic parameters and their variation with changes in macropore density is crucial for assessing potential contamination from agricultural chemicals. This study investigates the consequences of using consistent matrix and macropore parameters in simulating preferential flow and bromide transport in soil columns with different macropore densities (no macropore, single macropore, and multiple macropores). As used herein, the term "macropore density" is intended to refer to the number of macropores per unit area. A comparison between continuum-scale models including single-porosity model (SPM), mobile-immobile model (MIM), and dual-permeability model (DPM) that employed these parameters is also conducted. Domain-specific parameters are obtained from inverse modeling of homogeneous (no macropore) and central macropore columns in a deterministic framework and are validated using forward modeling of both low-density (3 macropores) and high-density (19 macropores) multiple-macropore columns. Results indicate that these inversely modeled parameters are successful in describing preferential flow but not tracer transport in both multiple-macropore columns. We believe that lateral exchange between matrix and macropore domains needs better accounting to efficiently simulate preferential transport in the case of dense, closely spaced macropores. Increasing model complexity from SPM to MIM to DPM also improved predictions of preferential flow in the multiple-macropore columns but not in the single-macropore column. This suggests that the use of a more complex model with resolved domain-specific parameters is recommended with an increase in macropore density to generate forecasts with higher accuracy. PMID:24511165

  3. Estimation of density and population size and recommendations for monitoring trends of Bahama parrots on Great Abaco and Great Inagua

    USGS Publications Warehouse

    Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.

    2005-01-01

    Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.

  4. PeaKDEck: a kernel density estimator-based peak calling program for DNaseI-seq data.

    PubMed

    McCarthy, Michael T; O'Callaghan, Christopher A

    2014-05-01

    Hypersensitivity to DNaseI digestion is a hallmark of open chromatin, and DNaseI-seq allows the genome-wide identification of regions of open chromatin. Interpreting these data is challenging, largely because of inherent variation in signal-to-noise ratio between datasets. We have developed PeaKDEck, a peak calling program that distinguishes signal from noise by randomly sampling read densities and using kernel density estimation to generate a dataset-specific probability distribution of random background signal. PeaKDEck uses this probability distribution to select an appropriate read density threshold for peak calling in each dataset. We benchmark PeaKDEck using published ENCODE DNaseI-seq data and other peak calling programs, and demonstrate superior performance in low signal-to-noise ratio datasets. PMID:24407222
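
    The core of the approach, sampling background read densities, fitting a kernel density estimate and reading a threshold off the fitted distribution, can be sketched in a few lines. The simulated read counts and the 0.999 quantile in the snippet are illustrative assumptions, not PeaKDEck's actual defaults.

        # Sketch of the PeaKDEck idea in miniature: sample read densities at random
        # background positions, fit a kernel density estimate to that background, and
        # take a high quantile of the fitted distribution as the dataset-specific
        # peak-calling threshold. Simulated counts and the quantile are illustrative.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(6)
        background = rng.poisson(4.0, size=20000).astype(float)     # sampled read densities
        background += rng.normal(scale=0.25, size=background.size)  # jitter for a smooth KDE

        kde = gaussian_kde(background)
        grid = np.linspace(background.min(), background.max() + 20.0, 2000)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]
        threshold = grid[np.searchsorted(cdf, 0.999)]                # background cutoff

        windows = rng.poisson(4.0, size=10000).astype(float)
        windows[:25] += 30.0                                         # a few true "peaks"
        print("threshold:", round(float(threshold), 2),
              "called:", int((windows > threshold).sum()))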

  5. Estimation of the low-density (beta) lipoproteins of serum in health and disease using large molecular weight dextran sulphate

    PubMed Central

    Walton, K. W.; Scott, P. J.

    1964-01-01

    Studies have been made of the factors affecting the specificity of the interaction between high molecular weight dextran sulphate and low-density lipoproteins, both in pure solution and in serum. The results have been used in the development of a simple assay method for the serum concentration of low-density lipoproteins in small volumes of serum. The results obtained by this assay procedure have been found to correlate acceptably with parallel estimations of low-density lipoproteins by an ultracentrifugal technique and by paper electrophoresis. The technique has been applied to a survey of serum levels of these proteins in a normal population. The results have been compared with data in the literature. Satisfactory agreement was found between mean levels, matched for age and sex, between the dextran sulphate method and those methods based ultimately on chemical estimation of one or more components of the isolated lipoproteins. A systematic difference was observed when the dextran sulphate method was compared with estimates based on analytical ultracentrifugation or turbidimetry using amylopectin sulphate. Some indication of the range of application of the dextran sulphate method in clinical chemistry is provided. PMID:14227432

  6. Estimating the population density of the Asian tapir (Tapirus indicus) in a selectively logged forest in Peninsular Malaysia.

    PubMed

    Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David

    2012-12-01

    The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km2. Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368

  7. Estimation of the radial size and density fluctuation amplitude of edge localized modes using microwave interferometer array

    NASA Astrophysics Data System (ADS)

    Ayub, M. K.; Yun, G. S.; Leem, J.; Kim, M.; Lee, W.; Park, H. K.

    2016-03-01

    A novel technique to estimate the range of radial size and density fluctuation amplitude of edge localized modes (ELMs) in the KSTAR tokamak plasma is presented. A microwave imaging reflectometry (MIR) system is reconfigured as a multi-channel microwave interferometer array (MIA) to measure the density fluctuations associated with ELMs, while an electron cyclotron emission imaging (ECEI) system is used as a reference diagnostic to confirm the MIA observation. Two-dimensional full-wave (FWR2D) simulations integrated with optics simulation are performed to investigate the Gaussian beam propagation and reflection through the plasma as well as the MIA optical components, and to obtain the interferometric phase undulations of individual channels at the detector plane due to an ELM perturbation. The simulation results show that the amplitude of the phase undulation depends linearly on both the radial size and the density perturbation amplitude of the ELM. For a typical discharge with ELMs, it is estimated that the ELM structure observed by the MIA system has a density perturbation amplitude in the range of ~7% to 14% and a radial size in the range of ~1 to 3 cm.

  8. Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...

  9. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation

    PubMed Central

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S. M. Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    Background In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distances of the second and third nearest plants, respectively) show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having ‘random’, ‘aggregated’ and ‘regular’ spatial patterns) plant populations and empirical ones. Principal Findings PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R2) but not 12N/(π ∑ R2), of PCQM2 is 4(8N − 1)/(π ∑ R2) but not 28N/(π ∑ R2) and of PCQM3 is 4(12N − 1)/(π ∑ R2) but not 44N/(π ∑ R2) as published. Significance If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all
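
    The corrected and previously published PCQM1 estimators quoted above are easy to compare numerically. The sketch below simulates quadrant nearest-neighbour distances from a random (Poisson) pattern of known density, using the fact that for such a pattern the squared quadrant distance is exponentially distributed, and applies both formulas; the chosen density, number of sample points and number of trials are illustrative.

        # Sketch: the corrected and previously published PCQM1 estimators quoted above,
        # applied to quadrant nearest-neighbour distances simulated from a random
        # (Poisson) plant pattern of known density. Assumes that for a Poisson pattern
        # the squared quadrant distance is exponential with mean 4/(pi * density).
        import numpy as np

        def pcqm1_corrected(distances):     # 4(4N - 1) / (pi * sum R^2)
            n_points = distances.shape[0]
            return 4.0 * (4.0 * n_points - 1.0) / (np.pi * np.sum(distances ** 2))

        def pcqm1_published(distances):     # 12N / (pi * sum R^2)
            n_points = distances.shape[0]
            return 12.0 * n_points / (np.pi * np.sum(distances ** 2))

        rng = np.random.default_rng(7)
        true_density = 0.05                 # plants per unit area
        n_points, trials = 60, 2000         # at least 50 sample points, as recommended
        est_corr, est_pub = [], []
        for _ in range(trials):
            r_squared = rng.exponential(4.0 / (np.pi * true_density), size=(n_points, 4))
            d = np.sqrt(r_squared)
            est_corr.append(pcqm1_corrected(d))
            est_pub.append(pcqm1_published(d))
        print("true:", true_density,
              "corrected:", round(float(np.mean(est_corr)), 4),
              "published:", round(float(np.mean(est_pub)), 4))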

  10. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    NASA Astrophysics Data System (ADS)

    Sarangi, Bighnaraj; Aggarwal, Shankar G.; Sinha, Deepak; Gupta, Prabhat K.

    2016-03-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from the solution of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and also applied for ambient measurement in New Delhi. We also discuss uncertainty involved in the measurement. In this method, dried particles are introduced in to a differential mobility analyser (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure particle number concentration, whereas the other one is sent to the QCM to measure the particle mass concentration simultaneously. Based on particle volume derived from size distribution data of the SMPS and mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of inorganic salt particles (for particle count mean diameter (CMD) over a size range 10-478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm-3, values which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm-3, respectively. Using this technique, the percentage contribution of error in the measurement of effective density is calculated to be in the range of 9-17 %. Among the individual uncertainty components, repeatability of particle mass obtained by the QCM, the QCM crystal frequency, CPC counting efficiency, and the equivalence of CPC- and QCM-derived volume are the major contributors to the expanded uncertainty (at k = 2) in comparison to other components, e.g. diffusion correction, charge correction, etc. Effective density for ambient particles at the beginning of the winter period in New Delhi was measured to be 1.28 ± 0.12 g cm-3
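
    The effective-density calculation itself reduces to dividing the QCM mass concentration by the particle volume concentration implied by the SMPS size distribution. The bin diameters, number concentrations and mass value in the sketch below are hypothetical, chosen only to show the unit handling.

        # Sketch: the effective-density calculation described above, i.e. the QCM mass
        # concentration divided by the particle volume concentration derived from the
        # SMPS size distribution. Size bins, counts and the mass value are hypothetical.
        import numpy as np

        def effective_density(diam_nm, number_per_cm3, mass_ug_m3):
            d_cm = diam_nm * 1e-7                                   # nm -> cm
            volume_cm3_per_cm3 = np.sum(number_per_cm3 * np.pi / 6.0 * d_cm ** 3)
            mass_g_per_cm3 = mass_ug_m3 * 1e-6 / 1e6                # ug m^-3 -> g cm^-3
            return mass_g_per_cm3 / volume_cm3_per_cm3              # g cm^-3

        diam = np.array([50.0, 100.0, 200.0, 300.0])                # bin midpoints, nm
        counts = np.array([4.0e3, 2.0e3, 5.0e2, 1.0e2])             # particles cm^-3 per bin
        mass = 8.0                                                  # ug m^-3 from the QCM
        print(round(float(effective_density(diam, counts, mass)), 2))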

  11. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    PubMed

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

    1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat, using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area, which characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. PMID:22320218
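
    The density step in point 3 reduces to a simple division; the sketch below assumes, purely for illustration, that each trap's attraction area is a circle of the estimated attraction radius (the paper estimates these radii within the Bayesian model; names here are hypothetical):

```python
import math

def local_density(counts, n_traps, attraction_radius):
    """Local population density from capture counts at given time points,
    dividing each Poisson count by the total estimated attraction area
    (circular-area assumption is illustrative, not the paper's exact model)."""
    total_area = n_traps * math.pi * attraction_radius ** 2
    return [c / total_area for c in counts]
```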

  12. Effects of time-series length and gauge network density on rainfall climatology estimates in Latin America

    NASA Astrophysics Data System (ADS)

    Maeda, E.; Arevalo, J.; Carmona-Moreno, C.

    2012-04-01

    Despite recent advances in the development of satellite sensors for monitoring precipitation at high spatial and temporal resolutions, the assessment of rainfall climatology still relies strongly on ground-station measurements. The Global Historical Climatology Network (GHCN) is one of the most popular station databases available to the international community. Nevertheless, the spatial distribution of these stations is not always homogeneous, and the record length varies greatly from station to station. This study aimed to evaluate how the number of years recorded at the GHCN stations and the density of the network affect the uncertainties of annual rainfall climatology estimates in Latin America. The method applied was divided into two phases. In the first phase, Monte Carlo simulations were performed to evaluate how the number of samples and the characteristics of the rainfall regime affect estimates of annual average rainfall. The simulations were performed using gamma distributions with pre-defined parameters, which generated synthetic annual precipitation records. The average and dispersion of the synthetic records were then estimated through the L-moments approach and compared with the original probability distribution used to produce the samples. The number of records (n) used in the simulation varied from 10 to 150, reproducing the range of record lengths typically found at meteorological stations. A power function of the form RMSE = f(n) = c·n^a, whose coefficients were defined as a function of the statistical dispersion of rainfall, was fitted to the errors. In the second phase of the assessment, the results of the simulations were extrapolated to real records obtained by the GHCN over Latin America, producing estimates of the errors associated with the number of records and the rainfall characteristics at each station. To generate a spatially explicit representation of the uncertainties, the errors at each station were interpolated using the inverse distance
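
    A minimal sketch of the first phase, assuming gamma-distributed annual totals and using the plain sample mean instead of the paper's L-moments estimator; the shape/scale values, repetition count and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def rmse_vs_record_length(shape, scale, lengths, n_rep=2000):
    """Monte Carlo RMSE of the estimated annual mean rainfall as a function
    of record length n, for a synthetic gamma rainfall climate."""
    true_mean = shape * scale
    rmse = []
    for n in lengths:
        sample_means = rng.gamma(shape, scale, size=(n_rep, n)).mean(axis=1)
        rmse.append(np.sqrt(np.mean((sample_means - true_mean) ** 2)))
    return np.array(rmse)

lengths = np.arange(10, 151, 10)
errors = rmse_vs_record_length(shape=4.0, scale=300.0, lengths=lengths)

# Fit the power function RMSE = c * n**a in log-log space
a, log_c = np.polyfit(np.log(lengths), np.log(errors), 1)
print(f"RMSE ~ {np.exp(log_c):.1f} * n^{a:.2f}")
```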

  13. Fourier and Wavelet Based Characterisation of the Ionospheric Response to the Solar Eclipse of August, the 11th, 1999, Measured Through 1-minute Vertical Ionospheric Sounding

    NASA Astrophysics Data System (ADS)

    Sauli, P.; Abry, P.; Boska, J.

    2004-05-01

    The aim of the present work is to study the ionospheric response induced by the solar eclipse of 11 August 1999. We provide Fourier and wavelet based characterisations of the propagation of the acoustic-gravity waves induced by the solar eclipse. The analysed data consist of profiles of electron concentration, derived from 1-minute vertical-incidence ionospheric sounding measurements performed at the Pruhonice observatory (Czech Republic, 49.9N, 14.5E). The 1-minute sampling rate was chosen specifically to resolve modes below the acoustic cut-off period. The August period was characterized by a solar flux F10.7 = 128, steady solar wind, quiet magnetospheric conditions and low geomagnetic activity (the Dst index varied from -10 nT to -20 nT, and the Σ Kp index reached a value of 12+). The eclipse was also notable for an exceptionally uniform solar disk. These conditions, and the fact that the culmination of the solar eclipse over central Europe occurred at local noon, are such that the observed ionospheric response is mainly that of the solar eclipse. We provide a full characterization of the propagation of the waves in terms of times of occurrence, group and phase velocities, propagation direction, characteristic period and lifetime of each wave structure. The vertical sounding technique, however, only gives access to the vertical component of each characteristic. The parameters are estimated by combining Fourier and wavelet analysis. Our conclusions confirm earlier theoretical and experimental findings regarding the generation and propagation of gravity waves, reported in [Altadill et al., 2001; Farges et al., 2001; Muller-Wodarg et al., 1998], and provide a complementary characterisation using wavelet approaches. We also report new evidence for the generation and propagation of acoustic waves induced by the solar eclipse through the ionospheric F region. To our knowledge, this is the first time that acoustic waves can be demonstrated based on ionospheric

  14. Estimating Population Density of the San Martin Titi Monkey (Callicebus oenanthe) in Peru Using Vocalisations.

    PubMed

    van Kuijk, Silvy M; García-Suikkanen, Carolina; Tello-Alvarado, Julio C; Vermeer, Jan; Hill, Catherine M

    2015-01-01

    We calculated the population density of the critically endangered Callicebus oenanthe in the Ojos de Agua Conservation Concession, a dry forest area in the department of San Martin, Peru. Results showed significant differences (p < 0.01) in group densities between forest boundaries (16.5 groups/km², IQR = 21.1-11.0) and forest interior (4.0 groups/km², IQR = 5.0-0.0), suggesting the 2,550-ha area harbours roughly 1,150 titi monkeys. This makes Ojos de Agua an important cornerstone in the conservation of the species, because it is one of the largest protected areas where the species occurs. PMID:26824671

  15. An empirical model to estimate density of sodium hydroxide solution: An activator of geopolymer concretes

    NASA Astrophysics Data System (ADS)

    Rajamane, N. P.; Nataraja, M. C.; Jeyalakshmi, R.; Nithiyanantham, S.

    2016-02-01

    Geopolymer concrete (GPC) is a zero-Portland-cement concrete containing an alumino-silicate-based inorganic polymer as binder. The polymer is obtained by chemical activation of alumina- and silica-bearing materials, such as blast furnace slag, with highly alkaline solutions such as hydroxides and silicates of alkali metals. Sodium hydroxide solutions (SHS) of different concentrations are commonly used in making GPC mixes. Often, an SHS of very high concentration is diluted with water to obtain an SHS of the desired concentration. While doing so, it was observed that the solute particles of NaOH in the SHS tend to occupy lower volumes as the degree of dilution increases. This aspect is discussed in this paper. The observed phenomenon needs to be understood when formulating GPC mixes, since it considerably influences the relationship between the concentration and the density of the SHS. This paper suggests an empirical formula relating the density of SHS directly to its concentration expressed as w/w.

  16. Estimating the effective density of engineered nanomaterials for in vitro dosimetry.

    PubMed

    DeLoid, Glen; Cohen, Joel M; Darrah, Tom; Derk, Raymond; Rojanasakul, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-01-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by benchtop centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174

  17. Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…

  18. When bulk density methods matter: Implications for estimating soil organic carbon pools in rocky soils

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...

  19. Comparison of volumetric breast density estimations from mammography and thorax CT

    NASA Astrophysics Data System (ADS)

    Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.

    2014-08-01

    Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and because it decreases screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the VBD computed from digital mammographic images (VBDMX) with the VBD computed from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast-equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential system (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows that it is possible to compare VBDMX with the VBD from thorax CT exams without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.
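
    A minimal sketch of how a mean breast HU could be converted to a volumetric density, assuming a linear two-component (adipose/fibroglandular) mixing model; this interpolation and the example HU values are illustrative assumptions, not the authors' exact formula:

```python
def vbd_from_ct(mean_hu_breast, hu_adipose, hu_fibroglandular):
    """Volumetric breast density (%) from the mean Hounsfield Unit over the
    delineated breast volume, interpolating linearly between the patient's
    adipose and fibroglandular reference HU values."""
    frac = (mean_hu_breast - hu_adipose) / (hu_fibroglandular - hu_adipose)
    return 100.0 * min(max(frac, 0.0), 1.0)   # clamp to the physical range

# Example: mean HU of -60 between adipose (-100 HU) and fibroglandular (+40 HU)
print(vbd_from_ct(-60.0, -100.0, 40.0))       # ~28.6 %
```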

  20. Estimation of Neutral Density in Edge Plasma with Double Null Configuration in EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Xu, Guosheng; Ding, Siye; Gao, Wei; Wu, Zhenwei; Chen, Yingjie; Huang, Juan; Liu, Xiaoju; Zang, Qing; Chang, Jiafeng; Zhang, Wei; Li, Yingying; Qian, Jinping

    2011-08-01

    In this work, population coefficients of the hydrogen n = 3 excited state from the hydrogen collisional-radiative (CR) model, taken from the data files of DEGAS 2, are used to calculate the photon emissivity coefficients (PECs) of the hydrogen Balmer-α (n = 3 → n = 2) line (Hα). The results are compared with the PECs from the Atomic Data and Analysis Structure (ADAS) database, and good agreement is found. A magnetic-surface-averaged neutral density profile of a typical double-null (DN) plasma in EAST is obtained using FRANTIC, a 1.5-D fluid transport code. It is found that the sum of the integrated Dα and Hα emission intensities calculated from the neutral density agrees with the measurements obtained with the absolutely calibrated multi-channel poloidal photodiode array systems viewing the lower divertor at the last closed flux surface (LCFS). It is revealed that the typical magnetic-surface-averaged neutral density at the LCFS is about 3.5 × 10¹⁶ m⁻³.

  1. Fiber density estimation from single q-shell diffusion imaging by tensor divergence.

    PubMed

    Reisert, Marco; Mader, Irina; Umarova, Roza; Maier, Simon; Tebartz van Elst, Ludger; Kiselev, Valerij G

    2013-08-15

    Diffusion-weighted magnetic resonance imaging provides information about the nerve fiber bundle geometry of the human brain. While the inference of the underlying fiber bundle orientation requires only single q-shell measurements, the absolute determination of their volume fractions is much more challenging with respect to both measurement techniques and analysis. Unfortunately, the usually employed multi-compartment models cannot be applied to single q-shell measurements, because the compartments' diffusivities cannot be resolved. This work proposes an equation for fiber orientation densities that can infer the absolute fractions up to a global factor. This equation, inspired by the classical mass-preservation law in fluid dynamics, expresses the fiber conservation associated with the assumption that fibers do not terminate in white matter. Simulations on synthetic phantoms show that the approach is able to derive the densities correctly for various configurations. Experiments with a pseudo-ground-truth phantom show that even for complex, brain-like geometries the method infers the densities correctly. In-vivo results from 81 healthy volunteers are plausible and consistent. A group analysis with respect to age and gender shows significant differences, so that the proposed maps can be used as a quantitative measure for group and longitudinal analyses. PMID:23541798
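
    A schematic statement of the fiber-conservation idea described above, written as a continuity-type constraint; the notation is illustrative and not the paper's exact tensor-divergence formulation:

```latex
% rho(x, n): fiber orientation density at position x along unit direction n.
% Fibers that do not start or terminate inside white matter behave like an
% incompressible flux, which fixes rho only up to one global factor:
\nabla \cdot \bigl( \rho(\mathbf{x}, \mathbf{n})\, \mathbf{n} \bigr) = 0
```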

  2. Storage density estimation for the phase-encoding and shift multiplexing holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-09-01

    The holographic optical correlator (HOC) is applicable in situations where an instant search through a huge database is required. The primary advantages of the HOC are its inherent parallel processing ability and large storage capacity. The HOC's searching speed is proportional to the storage density. This paper proposes a phase-encoding method in the object beam to increase the storage density. A random phase plate (RPP) is used to encode the phase of the object beam before the data pages are uploaded to the object beam. By shifting the RPP by a designed interval, the object beam is modulated so that it is orthogonal to the previous one, and a new group of the database can be stored. Experimental results verify the proposed method. The maximum number of data pages stored with the RPP at a fixed position can be as large as 7,500. The crosstalk among the different groups of the database can be kept unnoticeable. The increase in the storage density of the HOC depends on the number of orthogonal positions provided by the different portions of the same RPP.

  3. Estimating the density of intermediate size KBOs from considerations of volatile retention

    NASA Astrophysics Data System (ADS)

    Levi, Amit; Podolak, Morris

    2011-07-01

    By using a hydrodynamic atmospheric escape mechanism (Levi, A., Podolak, M. [2009]. Icarus 202, 681-693) we show how the unusually high mass density of Quaoar could have been predicted (constrained), without any knowledge of a binary companion. We suggest an explanation of the recent spectroscopic observations of Orcus and Charon [Delsanti, A., Merlin, F., Guilbert, A., Bauer, J., Yang, B., Meech, K.J., 2010. Astron. Astrophys. 520, A40; Cook, J.C., Desch, S.J., Roush, T.L., Trujillo, C.A., Geballe, T.R., 2007. Astrophys. J. 663, 1406-1419]. We present a simple relation between the detection of certain volatile ices and the body mass density and diameter. As a test case we implement the relations on the KBO 2003 AZ 84 and give constraints on its mass density. We also present a method of relating the latitude-dependence of hydrodynamic gas escape to the internal structure of a rapidly rotating body and apply it to Haumea.

  4. A wavelet-based method for the forced vibration analysis of piecewise linear single- and multi-DOF systems with application to cracked beam dynamics

    NASA Astrophysics Data System (ADS)

    Joglekar, D. M.; Mitra, M.

    2015-12-01

    The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. Applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. A parametric study involving variations in the crack depth and crack location is performed to bring out their effect on the relative strengths of the higher-order harmonics. The versatility of the method is demonstrated by considering cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to presenting the wavelet-based method as a viable alternative for analyzing the response of piecewise linear oscillators, the proposed method can easily be extended to solve inverse problems, unlike direct time-integration schemes.
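
    For context, the class of system being analyzed can be sketched as a bilinear (breathing-crack) SDOF oscillator; the snippet below integrates it with a simple semi-implicit Euler scheme and is only an illustration of the oscillator itself, not the paper's wavelet solution or its linear-complementarity validation scheme:

```python
import numpy as np

def bilinear_sdof_response(m, k_open, k_closed, c, f0, omega, t_end, dt=1e-4):
    """Displacement response of a bilinear SDOF oscillator under harmonic
    forcing f0*sin(omega*t).  The stiffness switches between the 'crack open'
    and 'crack closed' values with the sign of the displacement, a common
    breathing-crack idealization."""
    n = int(t_end / dt)
    x = np.zeros(n)
    v = np.zeros(n)
    for i in range(n - 1):
        k = k_closed if x[i] >= 0.0 else k_open
        a = (f0 * np.sin(omega * i * dt) - c * v[i] - k * x[i]) / m
        v[i + 1] = v[i] + a * dt
        x[i + 1] = x[i] + v[i + 1] * dt
    return x
```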

  5. Evaluating the Performance of Wavelet-based Data-driven Models for Multistep-ahead Flood Forecasting in an Urbanized Watershed

    NASA Astrophysics Data System (ADS)

    Kasaee Roodsari, B.; Chandler, D. G.

    2015-12-01

    A real-time flood forecast system is presented to provide emergency management authorities with sufficient lead time to execute plans for evacuation and asset protection in urban watersheds. This study investigates the performance of two hybrid models for real-time flood forecasting at different subcatchments of the Ley Creek watershed, a heavily urbanized watershed in the vicinity of Syracuse, New York. The hybrid models are a Wavelet-Based Artificial Neural Network (WANN) and a Wavelet-Based Adaptive Neuro-Fuzzy Inference System (WANFIS). Both models are developed on the basis of real-time stream network sensing. The wavelet approach is applied to decompose the collected water-depth time series into approximation and detail components. The approximation component is then used as an input to the ANN and ANFIS models to forecast the water level at lead times of 1 to 10 hours. The performance of the WANN and WANFIS models is compared with that of plain ANN and ANFIS models for different lead times. Initial results demonstrate the greater predictive power of the hybrid models.
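
    A minimal sketch of the WANN idea described above, assuming PyWavelets and scikit-learn as stand-ins; the wavelet family, decomposition level, lag count and network size are illustrative assumptions rather than the study's configuration:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_approximation(series, wavelet="db4", level=2):
    """Keep only the approximation component of a stage time series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # zero the details
    return pywt.waverec(coeffs, wavelet)[: len(series)]

def train_wann(stage, lead=6, n_lags=12):
    """Train an ANN to map lagged approximations to the level `lead` steps ahead."""
    approx = wavelet_approximation(np.asarray(stage, dtype=float))
    X = np.array([approx[i - n_lags:i] for i in range(n_lags, len(approx) - lead)])
    y = approx[n_lags + lead: len(approx)]
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model
```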

  6. Wavelet-based multiscale analysis of bioimpedance data measured by electric cell-substrate impedance sensing for classification of cancerous and normal cells

    NASA Astrophysics Data System (ADS)

    Das, Debanjan; Shiladitya, Kumar; Biswas, Karabi; Dutta, Pranab Kumar; Parekh, Aditya; Mandal, Mahitosh; Das, Soumen

    2015-12-01

    The paper presents a study to differentiate normal and cancerous cells using the label-free bioimpedance signal measured by electric cell-substrate impedance sensing. The real-time bioimpedance data of human breast cancer cells and human epithelial normal cells exhibit fluctuations of the impedance value caused by cellular micromotion, which results from the dynamic structural rearrangement of membrane protrusions under non-agitated conditions. Here, a wavelet-based multiscale quantitative analysis technique has been applied to analyze these fluctuations in bioimpedance. The study demonstrates a method to classify cancerous and normal cells from the signature of their impedance fluctuations. The fluctuations associated with cellular micromotion are quantified in terms of cellular energy, cellular power dissipation, and cellular moments. The cellular energy and power dissipation are found to be higher for cancerous cells, consistent with their higher micromotion. This initial study suggests that the proposed wavelet-based quantitative technique promises to be an effective method for analyzing real-time bioimpedance signals to distinguish cancerous from normal cells.
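
    A minimal sketch of a scale-wise wavelet energy measure for the impedance fluctuations, assuming PyWavelets; the wavelet family, number of levels and the exact energy definition are stated as plausible assumptions and may differ from the paper's quantities:

```python
import numpy as np
import pywt

def micromotion_energy(impedance_series, wavelet="db4", level=5):
    """Scale-wise 'cellular energy' of an ECIS impedance fluctuation signal,
    computed as the sum of squared wavelet detail coefficients at each scale."""
    signal = np.asarray(impedance_series, dtype=float)
    signal = signal - signal.mean()                    # keep only the fluctuations
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return {f"level_{i}": float(np.sum(c ** 2)) for i, c in enumerate(coeffs[1:], 1)}
```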

  7. Aerosol effective density measurement using scanning mobility particle sizer and quartz crystal microbalance with the estimation of involved uncertainty

    NASA Astrophysics Data System (ADS)

    Sarangi, B.; Aggarwal, S. G.; Sinha, D.; Gupta, P. K.

    2015-12-01

    In this work, we have used a scanning mobility particle sizer (SMPS) and a quartz crystal microbalance (QCM) to estimate the effective density of aerosol particles. This approach is tested for aerosolized particles generated from solutions of standard materials of known density, i.e. ammonium sulfate (AS), ammonium nitrate (AN) and sodium chloride (SC), and is also applied to ambient measurements in New Delhi. We also discuss the uncertainty involved in the measurement. In this method, dried particles are introduced into a differential mobility analyzer (DMA), where size segregation is done based on particle electrical mobility. Downstream of the DMA, the aerosol stream is subdivided into two parts. One is sent to a condensation particle counter (CPC) to measure the particle number concentration, whereas the other is sent to the QCM to measure the particle mass concentration simultaneously. Based on the particle volume derived from the size distribution data of the SMPS and the mass concentration data obtained from the QCM, the mean effective density (ρeff) with uncertainty of the inorganic salt particles (for a particle count mean diameter (CMD) over the size range 10 to 478 nm), i.e. AS, SC and AN, is estimated to be 1.76 ± 0.24, 2.08 ± 0.19 and 1.69 ± 0.28 g cm⁻³, which are comparable with the material density (ρ) values, 1.77, 2.17 and 1.72 g cm⁻³, respectively. Among the individual uncertainty components, the repeatability of the particle mass obtained by the QCM, the QCM crystal frequency, the CPC counting efficiency, and the equivalence of the CPC- and QCM-derived volumes are the major contributors to the expanded uncertainty (at k = 2), in comparison with other components, e.g. diffusion correction, charge correction, etc. The effective density of ambient particles at the beginning of the winter period in New Delhi is measured to be 1.28 ± 0.12 g cm⁻³. It was found that, in general, the mid-day effective density of ambient aerosols increases with an increase in the CMD of the particle size measurement, but particle photochemistry is an important

  8. Comparison study between coherent echoes at VHF range and electron density estimated by Ionosphere Model for Auroral Zone

    NASA Astrophysics Data System (ADS)

    Nishiyama, Takanori; Nakamura, Takuji; Tsutsumi, Masaki; Tanaka, Yoshi; Nishimura, Koji; Sato, Kaoru; Tomikawa, Yoshihiro; Kohma, Masashi

    2016-07-01

    Polar Mesosphere Winter Echo (PMWE) is backscatter echo from 55 to 85 km in the mesosphere, and it has been observed by MST and IS radars in the polar regions during non-summer periods. Since the density of free electrons, which act as the scatterers, is low in the dark winter mesosphere, it has been suggested that PMWE requires strong ionization of the neutral atmosphere associated with energetic particle precipitation (EPP) during solar proton events [Kirkwood et al., 2002] or during geomagnetically disturbed periods [Nishiyama et al., 2015]. However, studies on the relationship between PMWE occurrence and background electron density have remained limited [Lübken et al., 2006], partly because the PMWE occurrence rate is known to be quite low (2.9%) [Zeller et al., 2006]. The PANSY (Program of the Antarctic Syowa MST/IS) radar, the largest MST radar in Antarctica, has observed many PMWE events since it started mesosphere observations in June 2012. We established a method of using the PANSY radar as a riometer, which makes it possible to estimate cosmic noise absorption (CNA) as a proxy for relative variations in the background electron density. In addition, electron density profiles from 60 to 150 km altitude are calculated with the Ionospheric Model for the Auroral Zone (IMAZ) [McKinnell and Friedrich, 2007] and the CNA estimated by the PANSY radar. In this presentation, we focus on strong PMWE during two large geomagnetic storm events, the St. Patrick's Day storm and the Summer Solstice 2015 event, in order to compare the observed PMWE characteristics with the modelled background electron density. On March 19 and 22, during the recovery phase of the St. Patrick's Day storm, sudden PMWE intensification was detected near 60 km by the PANSY radar. At the same time, strong CNA of 0.8 dB and 1.0 dB was measured, respectively. However, the calculated electron density profiles did not show high electron density at the altitudes where the PMWE intensification was observed. On June 22, the

  9. Linkage Disequilibrium Estimation of Chinese Beef Simmental Cattle Using High-density SNP Panels

    PubMed Central

    Zhu, M.; Zhu, B.; Wang, Y. H.; Wu, Y.; Xu, L.; Guo, L. P.; Yuan, Z. R.; Zhang, L. P.; Gao, X.; Gao, H. J.; Xu, S. Z.; Li, J. Y.

    2013-01-01

    Linkage disequilibrium (LD) plays an important role in genomic selection and in mapping quantitative trait loci (QTL). In this study, the pattern of LD and the effective population size (Ne) were investigated in Chinese beef Simmental cattle. A total of 640 bulls were genotyped with the Illumina BovineSNP50 BeadChip and the Illumina BovineHD BeadChip. We estimated LD for each autosomal chromosome for distances between random SNP pairs of 0 to 25 kb, 25 to 50 kb, 50 to 100 kb, 100 to 500 kb, 0.5 to 1 Mb, 1 to 5 Mb and 5 to 10 Mb. The mean values of r² were 0.30, 0.16 and 0.08 as the separation between SNPs increased from 0-25 kb to 50-100 kb and then to 0.5-1 Mb, respectively. The LD estimates decreased as the distance between SNP pairs increased, and increased with increasing minor allele frequency (MAF) and with decreasing sample size. Estimates of the effective population size for Chinese beef Simmental cattle decreased over the past generations, and Ne was 73 five generations ago. PMID:25049849
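
    A minimal sketch of the r² statistic for a single SNP pair, using the genotype-correlation form that is a common shortcut for the haplotype-based r² when phase is unknown (an illustrative simplification, not necessarily the estimator used in the study):

```python
import numpy as np

def ld_r2(geno_a, geno_b):
    """Squared correlation (r^2) between two SNPs coded as 0/1/2 allele dosages."""
    a = np.asarray(geno_a, dtype=float)
    b = np.asarray(geno_b, dtype=float)
    r = np.corrcoef(a, b)[0, 1]
    return r * r
```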

  10. Estimating η/s of QCD matter at high baryon densities

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Bleicher, M.; Huovinen, P.; Petersen, H.

    2016-01-01

    We report on the application of a cascade + viscous hydro + cascade model for heavy ion collisions in the RHIC Beam Energy Scan range, √sNN = 6.3…200 GeV. By constraining model parameters to reproduce the data we find that the effective (average) value of the shear viscosity over entropy density ratio η/s decreases from 0.2 to 0.08 when the collision energy grows from √sNN ≈ 7 to 39 GeV.

  11. Primate diversity, habitat preferences, and population density estimates in Noel Kempff Mercado National Park, Santa Cruz Department, Bolivia.

    PubMed

    Wallace, R B; Painter, R L; Taber, A B

    1998-01-01

    This report documents primate communities at two sites within Noel Kempff Mercado National Park in northeastern Santa Cruz Department, Bolivia. Diurnal line transects and incidental observations were employed to survey two field sites, Lago Caiman and Las Gamas, providing information on primate diversity, habitat preferences, relative abundance, and population density. Primate diversity at both sites was not particularly high, with six observed species: Callithrix argentata melanura, Aotus azarae, Cebus apella, Alouatta caraya, A. seniculus, and Ateles paniscus chamek. Cebus showed no significant habitat preferences at Lago Caiman and was also more of a generalist in its use of forest strata, whereas Ateles clearly preferred the upper levels of structurally tall forest. Callithrix argentata melanura was rarely encountered during surveys at Lago Caiman, where it preferred low vine forest. Both species of Alouatta showed restricted habitat use and were sympatric in igapó forest in the Lago Caiman area. The most abundant primate at both field sites was Ateles, with density estimates reaching 32.1 individuals/km² in the lowland forest at Lago Caiman, compared to 14.1 individuals/km² for Cebus. Both Ateles and Cebus were absent from the smaller patches of gallery forest at Las Gamas. These densities are compared with estimates from other Neotropical sites. The diversity of habitats and their different floristic composition may account for the numerical dominance of Ateles within the primate communities at both sites. PMID:9802511

  12. Gaussian regression and power spectral density estimation with missing data: The MICROSCOPE space mission as a case study

    NASA Astrophysics Data System (ADS)

    Baghi, Quentin; Métris, Gilles; Bergé, Joël; Christophe, Bruno; Touboul, Pierre; Rodrigues, Manuel

    2016-06-01

    We present a Gaussian regression method for time series with missing data and stationary residuals of unknown power spectral density (PSD). The missing data are efficiently estimated by their conditional expectation as in universal Kriging based on the circulant approximation of the complete data covariance. After initialization with an autoregressive fit of the noise, a few iterations of estimation/reconstruction steps are performed until convergence of the regression and PSD estimates, in a way similar to the expectation-conditional-maximization algorithm. The estimation can be performed for an arbitrary PSD provided that it is sufficiently smooth. The algorithm is developed in the framework of the MICROSCOPE space mission whose goal is to test the weak equivalence principle (WEP) with a precision of 10⁻¹⁵. We show by numerical simulations that the developed method allows us to meet three major requirements: to maintain the targeted precision of the WEP test in spite of the loss of data, to calculate a reliable estimate of this precision and of the noise level, and finally to provide consistent and faithful reconstructed data to the scientific community.

  13. Winter wheat stand density determination and yield estimates from handheld and airborne scanners. [Montana

    NASA Technical Reports Server (NTRS)

    Aase, J. K.; Millard, J. P.; Siddoway, F. H. (Principal Investigator)

    1982-01-01

    Radiance measurements from handheld (Exotech 100-A) and airborne (Daedalus DEI 1260) radiometers were related to wheat (Triticum aestivum L.) stand densities (simulating winter wheat winterkill) and to grain yield for a field located 11 km northwest of Sidney, Montana, on a Williams loam soil (fine-loamy, mixed Typic Argiborolls), where a semidwarf hard red spring wheat cultivar was seeded to simulate winter wheat stands. Radiances were measured with the handheld radiometer on clear mornings throughout the growing season. Aircraft overflight measurements were made at the end of tillering, during the early stem-extension period, and at the mid-heading period. The IR/red ratio and the normalized difference vegetation index were used in the analysis. The aircraft measurements corroborated the ground measurements inasmuch as wheat stand densities were detected and could be evaluated at an early enough growth stage to make management decisions. The aircraft measurements also corroborated the handheld measurements when related to yield prediction. The IR/red ratio, although somewhat growth-stage dependent, related well to yield when measured from just past tillering until about the watery-ripe stage.
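
    The two vegetation indices named above have standard definitions; a minimal sketch, with example band values chosen purely for illustration:

```python
def ir_red_ratio(nir, red):
    """Simple ratio vegetation index: near-infrared over red radiance/reflectance."""
    return nir / red

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

# Example: nir = 0.45, red = 0.08  ->  ratio ~5.6, NDVI ~0.70
print(ir_red_ratio(0.45, 0.08), ndvi(0.45, 0.08))
```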

  14. Estimating the effective density of engineered nanomaterials for in vitro dosimetry

    PubMed Central

    DeLoid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Wang, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip

    2014-01-01

    The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by bench-top centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174

  15. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    PubMed Central

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752

  16. Individual movements and population density estimates for moray eels on a Caribbean coral reef

    NASA Astrophysics Data System (ADS)

    Abrams, R. W.; Schein, M. W.

    1986-12-01

    Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long-term population trends. Observations of eel distribution made using SCUBA during 1978, 1979, 1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree of site attachment is discussed for spotted morays (Gymnothorax moringa) and goldentail morays (Muraena miliaris). The repeated non-aggressive association of moray eels with large aggregations of potential prey fishes is detailed.

  17. Non-convex model of the binary asteroid (809) Lundia and its density estimation

    NASA Astrophysics Data System (ADS)

    Kryszczynska, A.; Bartczak, P.; Polinska, M.; Colas, F.

    2014-07-01

    Introduction: (809) Lundia was classified as a V-type asteroid in the Flora family (Florczak et al. 2002). The binary nature of (809) Lundia was discovered in September 2005 on the basis of photometric observations. The first modeling of the Lundia synchronous binary system was based on 22 lightcurves obtained at the Borowiec and Pic du Midi Observatories during two oppositions in 2005/2006 and 2006/2007. Two modeling methods --- modified Roche ellipsoids and kinematic --- gave similar parameters for the system (Kryszczynska et al. 2009). The poles of the orbit in ecliptic coordinates were: longitude 118° and latitude 28° in the modified Roche model, and 120° and 18°, respectively, in the kinematic model. The orbital period obtained from the lightcurve analysis as well as from the modeling was 15.418 h. The obtained bulk density of both components was 1.64 or 1.71 g/cm³. Observations: We observed (809) Lundia in the 2008, 2009/2010, 2011 and 2012 oppositions at the Borowiec, Pic du Midi, Prompt and Rozhen Observatories. As predicted, visible eclipse/occultation events were observed only in 2011. Currently, our dataset consists of 45 individual lightcurves, and they were all used in the new modeling. Method: We used a new modeling method based on a genetic algorithm that is able to derive a non-convex asteroid shape model, rotational period, and spin-axis orientation of a single or binary asteroid using only photometric observations. The details of the method are presented in the poster by Bartczak et al. at this conference. Results: The new non-convex model of (809) Lundia is presented in the figure. The parameters of the system in ecliptic coordinates are: longitude 122°, latitude 22°, and sidereal period 15.41574 h. They are very similar to the values obtained before. However, assuming an equivalent diameter of a single body of 9.1 km from the Spitzer observations (Marchis et al. 2012) and the volume of the two modeled bodies, the separation of the components

  18. Psychophysical estimates of visual pigment densities in red-green dichromats.

    PubMed

    Miller, S S

    1972-05-01

    1. The spectral sensitivity of red-green dichromats was determined using heterochromatic flicker photometric matches (25-30 c/s) on the fovea. These matches are upset after a bright bleach and consequently the spectral sensitivity is altered. 2. Preliminary experiments indicate that under the conditions in which these experiments were performed, the blue cone mechanism of deuteranopes and protanopes cannot follow 20 c/s flicker. If dichromats lack one of the normal pigments then the upset of these matches monitors the change in spectral sensitivity of a single mechanism. 3. After a bleach which removes all the cone pigments, the spectral sensitivity recovers with the time course of pigment kinetics as measured by densitometry. 4. An intense background also changes the relative spectral sensitivity of the dichromats. On real equilibrium backgrounds, the changes in spectral sensitivity follow those predicted by the pigment changes measured by densitometry. The predicted changes are obtained by modifying the Rushton equilibrium equation to take into account the density of pigment. 5. The relationship of these changes to the luminance of the background is independent of the colour of the background light. 6. In contradistinction the effect is dependent on the colour of the lights which were flickered. These experiments indicate that a narrowing of the spectral sensitivity curves takes place on both sides of the dichromats' λmax. 7. The change in relative spectral sensitivity as a function of background intensity was also determined by increment threshold measurements. These changes can be expressed in terms of deviations from Weber's law (ΔI/I = const.) if ΔI and I represent the number of chromophores destroyed by the test and background. 8. The relative spectral sensitivity of the dichromat was changed by decentering the point of pupil entry. This upset was abolished by bleaching. The size of the upset was correlated with the magnitude of the S-C I effect. 9

  19. Modeling and estimation of production rate for the production phase of non-growth-associated high cell density processes.

    PubMed

    Jamilis, Martín; Garelli, Fabricio; Mozumder, Md Salatul Islam; Castañeda, Teresita; De Battista, Hernán

    2015-10-01

    This paper addresses the estimation of the specific production rate of intracellular products and the modeling of the bioreactor volume dynamics in high-cell-density fed-batch reactors. In particular, a new model for the bioreactor volume is proposed, suitable for use in high-cell-density cultures in which large amounts of intracellular products are stored. Based on the proposed volume model, two forms of a high-order sliding mode observer are proposed, corresponding to the cases in which residual biomass concentration or volume is measured, respectively. The observers achieve finite-time convergence and robustness to process uncertainties, as the kinetic model is not required. Stability proofs for the proposed observers are given. The observer algorithm is assessed numerically and experimentally. PMID:26149912

  20. Estimation of effective hydrologic properties of soils from observations of vegetation density

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.; Eagleson, P. S.

    1980-01-01

    A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare-soil component of evaporation and in the way surface retention is handled. A natural-selection hypothesis, which specifies the equilibrium vegetation density for a given water-limited climate-soil system, is verified through comparisons with observed data. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterizing the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration and the variability of soil properties from place to place over the Earth's surface.