Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined into a hybrid model for volatility forecasting. In the first stage, wavelets are used to set the threshold of the generalized Pareto distribution, and in the second stage, EVT is applied with this wavelet-based threshold. The new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the RiskMetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based EVT improves predictive performance according to the number of violations and tail-loss tests. Its superior forecasting performance is also consistent with Basel II requirements, making the new model suitable for use by financial institutions.
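Under the hood, the GPD stage of such a model reduces to a closed-form tail quantile. Below is a minimal sketch of the standard peaks-over-threshold VaR formula; this is the generic textbook formula, not the paper's wavelet-based threshold selection, and all parameter values in the example are illustrative:

```python
import math

def gpd_var(threshold, sigma, xi, n, n_exceed, p):
    """Value-at-risk at confidence level p (e.g. 0.99) from a GPD
    with scale sigma and shape xi fitted to the n_exceed exceedances
    over `threshold` in a sample of size n."""
    if xi == 0:  # exponential tail limit of the GPD
        return threshold + sigma * math.log(n_exceed / (n * (1 - p)))
    return threshold + (sigma / xi) * ((n / n_exceed * (1 - p)) ** (-xi) - 1)
```

For example, with threshold 0.02, sigma 0.01, shape 0.1, 1000 returns and 50 exceedances, the 99% VaR comes out near 0.037, and it grows as the confidence level rises.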
Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation
NASA Astrophysics Data System (ADS)
Lei, Sheau-Fang; Tung, Ying-Kai
Spectral subtraction is commonly used for speech enhancement in a single channel system because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. We propose a wavelet-based approach in this paper for suppressing the background noise for speech enhancement in a single channel system. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted with the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches for speech enhancement in nonstationary noise environments.
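For intuition, here is a bare-bones one-level Haar transform with soft thresholding of the detail band. It is a minimal stand-in for the idea, not the paper's perceptually motivated wavelet packet scheme, and the time-adapted noise estimation is not reproduced:

```python
def haar_forward(x):
    """One-level Haar transform: pairwise averages (approximation)
    and differences (detail), orthonormally scaled."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5])
    return x

def soft(coeffs, t):
    """Soft thresholding: zero small coefficients, shrink the rest by t."""
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t) for c in coeffs]

def denoise(x, t):
    """Threshold only the detail band, as in basic wavelet denoising."""
    a, d = haar_forward(x)
    return haar_inverse(a, soft(d, t))
```

Reconstruction is exact when the threshold is zero; raising the threshold suppresses small detail coefficients, which is where broadband noise tends to concentrate.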
Estimation of Modal Parameters Using a Wavelet-Based Approach
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Haley, Sidney M.
1997-01-01
Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
Metwally, Khaled; Lefevre, Emmanuelle; Baron, Cécile; Zheng, Rui; Pithioux, Martine; Lasaygues, Philippe
2016-02-01
When assessing ultrasonic measurements of material parameters, the signal processing is an important part of the inverse problem. Measurements of thickness, ultrasonic wave velocity and mass density are required for such assessments. This study investigates the feasibility and the robustness of a wavelet-based processing (WBP) method based on a Jaffard-Meyer algorithm for calculating these parameters simultaneously and independently, using one single ultrasonic signal in the reflection mode. The appropriate transmitted incident wave, correlated with the mathematical properties of the wavelet decomposition, was determined using an adapted identification procedure to build a mathematically equivalent model for the electro-acoustic system. The method was tested on three groups of samples (polyurethane resin, bone and wood) using one 1-MHz transducer. For thickness and velocity measurements, the WBP method gave a relative error lower than 1.5%. The relative errors in the mass density measurements ranged between 0.70% and 2.59%. Despite discrepancies between manufactured and biological samples, the results obtained on the three groups of samples using the WBP method in the reflection mode were remarkably consistent, indicating that it is a reliable and efficient means of simultaneously assessing the thickness and the velocity of the ultrasonic wave propagating in the medium, and the apparent mass density of the material. PMID:26403278
Wavelet-based polarimetry analysis
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik
2014-06-01
Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135° right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
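The Stokes computation described above reduces to pixel-wise sums and differences of the six intensity measurements. A sketch of that step follows; the degree-of-linear-polarization helper is a common companion quantity, not something the paper specifies:

```python
import math

def stokes(i0, i45, i90, i135, i_rc, i_lc):
    """Stokes vector from the six intensity measurements named in the text."""
    s0 = i0 + i90      # total intensity
    s1 = i0 - i90      # horizontal vs vertical linear polarization
    s2 = i45 - i135    # +45 deg vs -45 deg linear polarization
    s3 = i_rc - i_lc   # right vs left circular polarization
    return s0, s1, s2, s3

def dolp(s0, s1, s2, *_):
    """Degree of linear polarization, often used to separate
    manmade surfaces from natural clutter."""
    return math.hypot(s1, s2) / s0
```

For a fully horizontally polarized pixel the degree of linear polarization is 1; for unpolarized clutter it tends toward 0.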
Varying kernel density estimation on ℝ+
Mnatsakanov, Robert; Sarkisian, Khachatur
2015-01-01
In this article a new nonparametric density estimator based on a sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of Mean Squared Error, Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare the new estimator and its modified version with traditional kernel density constructions.
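The abstract does not specify which asymmetric kernels are used; a standard choice on the positive half-line is Chen's gamma kernel, sketched here under that assumption (the bandwidth b is illustrative):

```python
import math

def gamma_pdf(t, shape, scale):
    """Density of a gamma(shape, scale) distribution at t."""
    if t <= 0:
        return 0.0
    return (t ** (shape - 1) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

def gamma_kde(x, data, b):
    """Asymmetric-kernel density estimate at x > 0 with bandwidth b:
    average of gamma(shape=x/b + 1, scale=b) densities evaluated at the data."""
    return sum(gamma_pdf(xi, x / b + 1.0, b) for xi in data) / len(data)
```

Because the kernel's shape parameter depends on the evaluation point x, its skew adapts near the origin, which is what lets such estimators avoid the boundary bias of symmetric kernels on ℝ+.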
WSPM: wavelet-based statistical parametric mapping.
Van De Ville, Dimitri; Seghier, Mohamed L; Lazeyras, François; Blu, Thierry; Unser, Michael
2007-10-01
Recently, we have introduced an integrated framework that combines wavelet-based processing with statistical testing in the spatial domain. In this paper, we propose two important enhancements of the framework. First, we revisit the underlying paradigm; i.e., that the effect of the wavelet processing can be considered as an adaptive denoising step to "improve" the parameter map, followed by a statistical detection procedure that takes into account the non-linear processing of the data. With an appropriate modification of the framework, we show that it is possible to reduce the spatial bias of the method with respect to the best linear estimate, providing conservative results that are closer to the original data. Second, we propose an extension of our earlier technique that compensates for the lack of shift-invariance of the wavelet transform. We demonstrate experimentally that both enhancements have a positive effect on performance. In particular, we present a reproducibility study for multi-session data that compares WSPM against SPM with different amounts of smoothing. The full approach is available as a toolbox, named WSPM, for the SPM2 software; it takes advantage of multiple options and features of SPM such as the general linear model. PMID:17689101
Wavelet-based algorithm for mesocyclone detection
NASA Astrophysics Data System (ADS)
Desrochers, Paul R.; Yee, Samuel Y. K.
1997-10-01
Severe weather such as tornadoes and large hail often emanates from thunderstorms that have persistent, well organized, rotating updrafts. These rotating updrafts, which are generally referred to as mesocyclones, appear as couplets of incoming and outgoing radial velocities to a single Doppler radar. Observations of mesocyclones reveal useful information on the kinematics in the vicinity of the storm updraft that, if properly interpreted, can be used to assess the likelihood and intensity of the severe weather. Automated algorithms for such assessments exist, but are inconsistent in their wind shear estimations and are prone to high false alarm rates. Reported here are the elements of a new approach that we believe will alleviate the shortcomings of previous mesocyclone detection algorithms. This wavelet-based approach enables us to focus on the known scales where mesocyclones reside. Common data quality problems associated with radar data such as noise and data gaps are handled effectively by the approach presented here. We demonstrate our approach with a 1D test pattern, then with a 2D synthetic mesocyclone vortex, and finally with a case study.
Multivariate Density Estimation and Visualization
Scott, David W.
David W. Scott (Rice University). This chapter examines the use of flexible methods to approximate an unknown density function, and techniques appropriate for visualization of densities in up to four dimensions.
Shantia Yarahmadian; Vineetha Menon; Majid Mahrooghy; Vahid A. Rezania
2015-10-25
Recent studies have revealed that microtubules (MTs) exhibit three transition states: growth, shrinkage, and pause. In this paper, we first introduce a three-state random evolution model as a framework for studying MT dynamics across the transition states of growth, pause, and shrinkage. Then, we introduce a non-traditional stack-run encoding scheme with 5 symbols for detecting transition states as well as encoding MT experimental data. Peak detection is carried out in the wavelet domain to effectively detect these three transition states. One added advantage of including peak information while encoding is that the peaks are detected efficiently and encoded simultaneously in the wavelet domain, without the need for further processing after the decoding stage. Experimental results show that this form of non-traditional stack-run encoding has better compression and reconstruction performance than traditional stack-run encoding and run-length encoding schemes. Parameters for MTs modeled in the three states are estimated and shown to closely approximate the original MT data at lower compression rates. As the compression rate increases, we may end up discarding details that are required to detect the transition states of MTs. Thus, choosing the right compression rate is a trade-off between the admissible level of error in signal reconstruction and parameter estimation, and a considerable rate of compression of the MT data.
Wavelet-based inversion of gravity data
Boschetti, Fabio (CSIRO Exploration & Mining)
We present a wavelet-based inversion of gravity data.
Dynamic speckle processing using wavelets based entropy
NASA Astrophysics Data System (ADS)
Passoni, I.; Dai Pra, A.; Rabal, H.; Trivi, M.; Arizaga, R.
2005-02-01
Dynamic speckle has been used in some biological and industrial applications for the characterization of transient processes. The time evolution of processes that show the dynamic speckle phenomenon is here characterized using wavelet-based entropy and employed to make quantitative and qualitative measurements of the sample activity. Results obtained in experiments on the drying of paint, and activity images of seeds and bruised fruits, are shown as examples.
Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach
NASA Astrophysics Data System (ADS)
Aloui, Chaker; Jammazi, Rania
2015-10-01
In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
Wavelet-based LASSO in functional linear regression
Zhao, Yihong; Ogden, R. Todd; Reiss, Philip T.
2011-01-01
In linear regression with functional predictors and scalar responses, it may be advantageous, particularly if the function is thought to contain features at many scales, to restrict the coefficient function to the span of a wavelet basis, thereby converting the problem into one of variable selection. If the coefficient function is sparsely represented in the wavelet domain, we may employ the well-known LASSO to select a relatively small number of nonzero wavelet coefficients. This is a natural approach to take but to date, the properties of such an estimator have not been studied. In this paper we describe the wavelet-based LASSO approach to regressing scalars on functions and investigate both its asymptotic convergence and its finite-sample performance through both simulation and real-data application. We compare the performance of this approach with existing methods and find that the wavelet-based LASSO performs relatively well, particularly when the true coefficient function is spiky. Source code to implement the method and data sets used in the study are provided as supplemental materials available online. PMID:23794794
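Because an orthonormal wavelet design makes the LASSO objective separable, the estimator in that special case reduces to soft-thresholding the empirical wavelet coefficients. A minimal sketch of that reduction, not the authors' full functional-regression machinery:

```python
def soft_threshold(z, lam):
    """Scalar LASSO solution for an orthonormal design: shrink the
    least-squares coefficient toward zero by lam, zeroing small ones."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_orthonormal(ls_coefs, lam):
    """Exact minimizer of (1/2)*||y - W b||^2 + lam*||b||_1 when the
    columns of W are orthonormal: apply soft_threshold coordinate-wise
    to the least-squares (wavelet) coefficients."""
    return [soft_threshold(z, lam) for z in ls_coefs]
```

With an orthonormal design the penalized objective decouples coordinate-wise, which is why a single soft-threshold pass gives the exact minimizer and why sparsity in the wavelet domain translates directly into variable selection.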
QUANTIFYING DEMOCRACY OF WAVELET BASES IN LORENTZ SPACES
Martell, José María
Eugenio Hernández and José María Martell. It is interesting to ask how far wavelet bases are from being democratic in L^{p,q}(R^d), p ≠ q.
Multivariate Density Estimation: An SVM Approach
Mukherjee, Sayan
1999-04-01
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. ...
Wavelet-based acoustic recognition of aircraft
Dress, W.B.; Kercel, S.W.
1994-09-01
We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
DENSITY ESTIMATION BY TOTAL VARIATION REGULARIZATION
Mizera, Ivan
Roger Koenker and Ivan Mizera. We consider L1 penalties based on the total variation of the estimated density, its square root, and its logarithm, and their derivatives, in the context of univariate and bivariate density estimation.
Wavelet-based analysis of circadian behavioral rhythms.
Leise, Tanya L
2015-01-01
The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453
Density Estimation Trees in High Energy Physics
Lucio Anderlini
2015-02-03
Density Estimation Trees can play an important role in exploratory data analysis for multidimensional, multi-modal data models of large samples. I briefly discuss the algorithm, a self-optimization technique based on kernel density estimation, and some applications in High Energy Physics.
Sparse Density Estimation on the Multinomial Manifold.
Hong, Xia; Gao, Junbin; Chen, Sheng; Zia, Tanveer
2015-11-01
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators. PMID:25647665
Risk Bounds for Mixture Density Estimation
Rakhlin, Alexander
2004-01-27
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Procedure (MLE) and the greedy procedure described by Li ...
ESTIMATES OF BIOMASS DENSITY FOR TROPICAL FORESTS
An accurate estimation of the biomass density in forests is a necessary step in understanding the global carbon cycle and the production of other atmospheric trace gases from biomass burning. In this paper the authors summarize the various approaches that have been developed for estimating...
3D Wavelet-Based Filter and Method
Moss, William C. (San Mateo, CA); Haase, Sebastian (San Francisco, CA); Sedat, John W. (San Francisco, CA)
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
Wavelet-Based Surrogates for Testing Time Series
Percival, Don
Don Percival (Applied Physics Lab), with Davison (EPFL, Lausanne, Switzerland). Overview of talk: background on surrogate data; the method of surrogate data, useful for identifying nonlinear time series.
Enhancing Hyperspectral Data Throughput Utilizing Wavelet-Based Fingerprints
I. W. Ginsberg
1999-09-01
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures
Percival, Don
Don Percival (Applied Physics Lab). Water-temperature observations Xt are recorded at the dam wall (temperature is regarded as an important driver of other water-quality variables), with a sampling interval of 2 hours for the water temperature time series; t is the time index for element Xt.
Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing
Carin, Lawrence
Lihan He and Lawrence Carin (Duke University). Bayesian compressive sensing (CS) is considered for signals that are sparse in a wavelet basis, and the structured approach is compared with state-of-the-art compressive-sensing inversion algorithms. Index terms: Bayesian signal processing, wavelets, sparseness.
Wavelet-Based Feature Extraction for Microarray Data Classification
Kwok, James Tin-Yau
Shutao Li, Chen Liao, James T. Kwok. Microarray data typically have thousands of genes, and thus feature extraction is an important step in classification.
Wavelet-based regularity analysis reveals recurrent spatiotemporal behavior in resting-state fMRI.
Smith, Robert X; Jann, Kay; Ances, Beau; Wang, Danny J J
2015-09-01
One of the major findings from multimodal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain's spontaneous activity is far less understood due to the short and noisy nature of the rs-fMRI signal. Here, we develop a wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform to measure recurrent temporal pattern stability within the rs-fMRI signal across multiple temporal scales. The method consists of performing a stationary wavelet transform to preserve signal structure, followed by construction of "lagged" subsequences to adjust for correlated features, and finally the calculation of sample entropy across wavelet scales based on an "objective" estimate of noise level at each scale. We found that the brain's default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared with standard multiscale entropy analysis. Wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform is a promising technique to characterize the dynamic structure of rs-fMRI as well as other biological signals. PMID:26096080
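The regularity measure above builds on sample entropy. Here is a direct, unoptimized implementation of the standard SampEn statistic; the paper's wavelet-domain decomposition and noise-based tolerance selection are not reproduced:

```python
import math

def sample_entropy(x, m, r):
    """Sample entropy: -log(A/B), where B counts template pairs of
    length m within tolerance r (Chebyshev distance) and A counts the
    same pairs still matching at length m+1. Self-matches are excluded."""
    n = len(x)

    def matches(length):
        c = 0
        for i in range(n - m):               # same template set for both lengths
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c

    b = matches(m)
    a = matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)
```

A perfectly periodic series scores zero (every length-m match extends to length m+1), while irregular series score higher; lower values mean greater regularity.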
Deriving Atmospheric Density Estimates Using Satellite Precision Orbit Ephemerides
Hiatt, Andrew Timothy
2009-01-01
POE density estimate overlap regions demonstrated a method of determining the consistency of the solutions. Gravity Recovery and Climate Experiment POE density estimates showed consistent results with the Challenging Mini...
Multiscale Analysis for Intensity and Density Estimation
Willett, Rebecca
Rice University thesis by Rebecca M. Willett, developing nonparametric multiscale polynomial methods for intensity and density estimation.
Density estimation by maximum quantum entropy
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-11-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.
Estimating density of Florida Key deer
Roberts, Clay Walton
2006-08-16
Florida Key deer (Odocoileus virginianus clavium) were listed as endangered by the U.S. Fish and Wildlife Service (USFWS) in 1967. A variety of survey methods have been used in estimating deer density and/or changes in population trends...
ADAPTIVE DENSITY ESTIMATION WITH MASSIVE DATA SETS
Scott, David W.
David W. Scott (Rice University) and Masahiko Sagae. For massive data sets, the promise of having sufficient data to estimate locally, guided by a cross-validation criterion, is examined. Massive data sets (MDS) represent one of the grand challenges.
Multiscale Poisson Intensity and Density Estimation
Nowak, Robert
R. M. Willett and R. D. Nowak. R. Nowak (nowak@engr.wisc.edu) is with the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison; this work was supported by the National Science Foundation.
Estimating animal population density using passive acoustics.
Marques, Tiago A; Thomas, Len; Martin, Stephen W; Mellinger, David K; Ward, Jessica A; Moretti, David J; Harris, Danielle; Tyack, Peter L
2013-05-01
Reliable estimation of the size or density of wild animal populations is very important for effective wildlife management, conservation and ecology. Currently, the most widely used methods for obtaining such estimates involve either sighting animals from transect lines or some form of capture-recapture on marked or uniquely identifiable individuals. However, many species are difficult to sight, and cannot be easily marked or recaptured. Some of these species produce readily identifiable sounds, providing an opportunity to use passive acoustic data to estimate animal density. In addition, even for species for which other visually based methods are feasible, passive acoustic methods offer the potential for greater detection ranges in some environments (e.g. underwater or in dense forest), and hence potentially better precision. Automated data collection means that surveys can take place at times and in places where it would be too expensive or dangerous to send human observers. Here, we present an overview of animal density estimation using passive acoustic data, a relatively new and fast-developing field. We review the types of data and methodological approaches currently available to researchers and we provide a framework for acoustics-based density estimation, illustrated with examples from real-world case studies. We mention moving sensor platforms (e.g. towed acoustics), but then focus on methods involving sensors at fixed locations, particularly hydrophones to survey marine mammals, as acoustic-based density estimation research to date has been concentrated in this area. Primary among these are methods based on distance sampling and spatially explicit capture-recapture. The methods are also applicable to other aquatic and terrestrial sound-producing taxa. 
We conclude that, despite being in its infancy, density estimation based on passive acoustic data likely will become an important method for surveying a number of diverse taxa, such as sea mammals, fish, birds, amphibians, and insects, especially in situations where inferences are required over long periods of time. There is considerable work ahead, with several potentially fruitful research areas, including the development of (i) hardware and software for data acquisition, (ii) efficient, calibrated, automated detection and classification systems, and (iii) statistical approaches optimized for this application. Further, survey design will need to be developed, and research is needed on the acoustic behaviour of target species. Fundamental research on vocalization rates and group sizes, and the relation between these and other factors such as season or behaviour state, is critical. Evaluation of the methods under known density scenarios will be important for empirically validating the approaches presented here. PMID:23190144
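At its core, each of the fixed-sensor approaches surveyed above ends in a Horvitz-Thompson-like step: detections are divided by the monitored area and an estimated detection probability. A deliberately simplified sketch of that final step, ignoring cue rates, group size, and survey design, all of which the review emphasizes:

```python
def density_estimate(n_detected, covered_area, p_detect):
    """Animals per unit area, corrected for imperfect detection.
    covered_area: total area monitored by the sensors (e.g. km^2);
    p_detect: estimated average probability of detecting an animal
    (or its call) within that area."""
    return n_detected / (covered_area * p_detect)
```

For example, 50 detections over 10 km^2 with an estimated detection probability of 0.5 gives a density of 10 animals per km^2; the hard statistical work in the methods reviewed here lies in estimating p_detect.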
DENSITY ESTIMATION FOR PROJECTED EXOPLANET QUANTITIES
Brown, Robert A.
2011-05-20
Exoplanet searches using radial velocity (RV) and microlensing (ML) produce samples of 'projected' mass and orbital radius, respectively. We present a new method for estimating the probability density distribution (density) of the unprojected quantity from such samples. For a sample of n data values, the method involves solving n simultaneous linear equations to determine the weights of delta functions for the raw, unsmoothed density of the unprojected quantity that cause the associated cumulative distribution function (CDF) of the projected quantity to exactly reproduce the empirical CDF of the sample at the locations of the n data values. We smooth the raw density using nonparametric kernel density estimation with a normal kernel of bandwidth σ. We calibrate the dependence of σ on n by Monte Carlo experiments performed on samples drawn from a theoretical density, in which the integrated square error is minimized. We scale this calibration to the ranges of real RV samples using the Normal Reference Rule. The resolution and amplitude accuracy of the estimated density improve with n. For typical RV and ML samples, we expect the fractional noise at the PDF peak to be approximately 80 n^(-log 2). For illustrations, we apply the new method to 67 RV values given a similar treatment by Jorissen et al. in 2001, and to the 308 RV values listed at exoplanets.org on 2010 October 20. In addition to analyzing observational results, our methods can be used to develop measurement requirements, particularly on the minimum sample size n, for future programs, such as the microlensing survey of Earth-like exoplanets recommended by the Astro 2010 committee.
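The smoothing step described above (Gaussian-kernel smoothing of a raw delta-function density) can be sketched as follows; the sample locations, weights, and bandwidth below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def smooth_density(locations, weights, grid, sigma):
    """Gaussian-kernel smoothing of a raw density made of weighted delta functions."""
    diffs = grid[:, None] - locations[None, :]
    kernels = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return kernels @ weights          # one normal bump per delta, scaled by its weight

x = np.array([0.5, 1.0, 2.0])         # delta-function locations (illustrative)
w = np.array([0.2, 0.5, 0.3])         # their weights, summing to 1
grid = np.linspace(-2.0, 5.0, 701)
pdf = smooth_density(x, w, grid, sigma=0.4)
print(round(float(np.trapz(pdf, grid)), 3))   # the smoothed density still integrates to ~1
```

The delta-function weights themselves would come from the paper's linear system; here only the kernel-smoothing stage is shown.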
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.
Coding sequence density estimation via topological pressure.
Koslicki, David; Thompson, Daniel J
2015-01-01
We give a new approach to coding sequence (CDS) density estimation in genomic analysis based on the topological pressure, which we develop from a well-known concept in ergodic theory. Topological pressure measures the 'weighted information content' of a finite word, and incorporates 64 parameters which can be interpreted as a choice of weight for each nucleotide triplet. We train the parameters so that the topological pressure fits the observed coding sequence density on the human genome, and use this to give ab initio predictions of CDS density over windows of size around 66,000 bp on the genomes of Mus musculus, Rhesus macaque and Drosophila melanogaster. While the differences between these genomes are too great to expect that training on the human genome could predict, for example, the exact locations of genes, we demonstrate that our method gives reasonable estimates for the 'coarse scale' problem of predicting CDS density. Inspired again by ergodic theory, the weightings of the nucleotide triplets obtained from our training procedure are used to define a probability distribution on finite sequences, which can be used to distinguish between intron and exon sequences from the human genome of lengths between 750 and 5,000 bp. At the end of the paper, we explain the theoretical underpinning for our approach, which is the theory of thermodynamic formalism from the dynamical systems literature. Mathematica and MATLAB implementations of our method are available at http://sourceforge.net/projects/topologicalpres/ . PMID:24448658
A Wavelet-Based Assessment of Topographic-Isostatic Reductions for GOCE Gravity Gradients
NASA Astrophysics Data System (ADS)
Grombein, Thomas; Luo, Xiaoguang; Seitz, Kurt; Heck, Bernhard
2014-07-01
Gravity gradient measurements from ESA's satellite mission Gravity field and steady-state Ocean Circulation Explorer (GOCE) contain significant high- and mid-frequency signal components, which are primarily caused by the attraction of the Earth's topographic and isostatic masses. In order to mitigate the resulting numerical instability of a harmonic downward continuation, the observed gradients can be smoothed with respect to topographic-isostatic effects using a remove-compute-restore technique. For this reason, topographic-isostatic reductions are calculated by forward modeling that employs the advanced Rock-Water-Ice methodology. The basis of this approach is a three-layer decomposition of the topography with variable density values and a modified Airy-Heiskanen isostatic concept incorporating a depth model of the Mohorovičić discontinuity. Moreover, tesseroid bodies are utilized for mass discretization and arranged on an ellipsoidal reference surface. To evaluate the degree of smoothing via topographic-isostatic reduction of GOCE gravity gradients, a wavelet-based assessment is presented in this paper and compared with statistical inferences in the space domain. Using the Morlet wavelet, continuous wavelet transforms are applied to measured GOCE gravity gradients before and after reducing topographic-isostatic signals. The analysis of a representative data set in the Himalayan region shows that applying the reductions leads to significantly smoothed gradients. In addition, smoothing effects that are invisible in the space domain can be detected in wavelet scalograms, making wavelet-based spectral analysis a powerful tool.
Bird population density estimated from acoustic signals
Dawson, D.K.; Efford, M.G.
2009-01-01
Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha⁻¹, SE 0.03 ha⁻¹) were consistent with estimates of the adult male population density from mist-netting (0.36 ha⁻¹, SE 0.12 ha⁻¹). The fitted model predicts sound attenuation of 0.11 dB m⁻¹ (SE 0.01 dB m⁻¹) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known.
We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
Traffic characterization and modeling of wavelet-based VBR encoded video
Yu Kuo; Jabbari, B.; Zafar, S.
1997-07-01
Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then used to validate their model.
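The N-state Markov traffic model described above can be sketched for N = 3; the transition probabilities and per-state bit rates below are invented placeholders, since the fitted values are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state chain (low / medium / high bit-rate states); the real
# transition probabilities and state rates would be fitted to the encoder output.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
rates = np.array([1.0, 2.5, 5.0])     # assumed Mbit/s emitted in each state

def simulate(n_frames, state=0):
    trace = np.empty(n_frames)
    for t in range(n_frames):
        trace[t] = rates[state]
        state = rng.choice(3, p=P[state])
    return trace

trace = simulate(10_000)
print(trace.mean())                   # long-run average rate of the chain
```

A fitted model of this kind can then be validated, as the authors do, by comparing moments and correlations of the simulated trace against the measured traffic.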
Wavelet Based Volatility Clustering Estimation of Foreign Exchange Rates
A. N. Sekar Iyengar
2009-10-01
We have presented a novel technique for detecting intermittencies in a financial time series of the foreign exchange rate data of the U.S. dollar to the Euro (US/EUR), using a combination of both statistical and spectral techniques. This has been possible due to Continuous Wavelet Transform (CWT) analysis, which has been widely applied to fluctuating data in various fields of science and engineering and is also being tried out in finance and economics. We have been able to qualitatively identify the presence of nonlinearity and chaos in the time series of the foreign exchange rates for US/EUR (United States dollar to Euro) and US/UK (United States dollar to United Kingdom Pound) currencies. Interestingly, we find that for the US/INDIA (United States dollar to Indian Rupee) foreign exchange rates, no such chaotic dynamics is observed. This could be a result of government control over the foreign exchange rates, instead of the market controlling them.
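A minimal numpy-only sketch of a continuous wavelet transform with a Morlet mother wavelet, applied to a synthetic series with one intermittent burst (the signal and all parameters are illustrative, not the actual exchange-rate data):

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Numpy-only continuous wavelet transform with a Morlet mother wavelet."""
    n = len(x)
    t = np.arange(-(n // 2), n // 2)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = t / s
        psi = np.exp(1j * w0 * u) * np.exp(-0.5 * u**2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.01, 512)                               # quiet "return" series
x[200:220] += 0.05 * np.sin(np.linspace(0, 10 * np.pi, 20))  # intermittent burst
power = np.abs(morlet_cwt(x, np.arange(2, 32))) ** 2
# the burst appears as a localized ridge of high power in the scalogram
print(power.shape)
```

Intermittency detection then amounts to locating such time-localized ridges in the wavelet power.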
Krug, R; Carballido-Gamio, J; Burghardt, A; Haase, S; Sedat, J W; Moss, W C; Majumdar, S
2005-04-11
Trabecular bone structure and bone density contribute to the strength of bone and are important in the study of osteoporosis. Wavelets are a powerful tool to characterize and quantify texture in an image. In this study the thickness of trabecular bone was analyzed in 8 cylindrical cores of the vertebral spine. Images were obtained from 3 Tesla (T) magnetic resonance imaging (MRI) and micro-computed tomography (µCT). Results from the wavelet-based analysis of trabecular bone were compared with standard two-dimensional structural parameters (analogous to bone histomorphometry) obtained using mean intercept length (MR images) and direct 3D distance transformation methods (µCT images). Additionally, the bone volume fraction was determined from MR images. We conclude that the wavelet-based analysis delivers results comparable to the established MR histomorphometric measurements. The average deviation in trabecular thickness was less than one pixel size between the wavelet and the standard approach for both MR and µCT analysis. Since the wavelet-based method is less sensitive to image noise, we see an advantage of wavelet analysis of trabecular bone for MR imaging when going to higher resolution.
EEG analysis using wavelet-based information tools.
Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A
2006-06-15
Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizure, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
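The relative wavelet energies and wavelet entropy used above can be sketched with a plain Haar DWT; the decomposition depth and test signal are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def haar_dwt(x, levels):
    """Plain Haar DWT: detail coefficients per level plus the final approximation."""
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return details, a

def relative_wavelet_energies(x, levels=4):
    details, approx = haar_dwt(x, levels)
    e = np.array([np.sum(d**2) for d in details] + [np.sum(approx**2)])
    return e / e.sum()                # fraction of total signal energy per band

def wavelet_entropy(p):
    """Shannon entropy of the relative-energy distribution."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * np.arange(1024) / 32) + 0.1 * rng.normal(size=1024)
p = relative_wavelet_energies(x)
print(p.sum(), wavelet_entropy(p))
```

A narrowband rhythm concentrates energy in one band and drives the entropy down, which is the effect the abstract reports during the recruitment rhythm.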
Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms
NASA Astrophysics Data System (ADS)
Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.
2013-02-01
The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism
NASA Astrophysics Data System (ADS)
Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.
2016-01-01
Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.
A Wavelet-Based Approach to Fall Detection
Palmerini, Luca; Bagalà, Fabio; Zanetti, Andrea; Klenk, Jochen; Becker, Clemens; Cappello, Angelo
2015-01-01
Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the "prototype fall". In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others considering different fall phases) in order to improve the performance of fall detection algorithms. PMID:26007719
Wavelet-based moment invariants for pattern recognition
NASA Astrophysics Data System (ADS)
Chen, Guangyi; Xie, Wenfang
2011-07-01
Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A small shift in the input signal can cause very different output wavelet coefficients. The auto-correlation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. Gaussian white noise is added to the noise-free images, with noise levels varying over different signal-to-noise ratios. Experiments conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. The proposed wavelet-based moments perform well even when the noise levels are very high.
A New Wavelet Based Approach to Assess Hydrological Models
NASA Astrophysics Data System (ADS)
Adamowski, J. F.; Rathinasamy, M.; Khosa, R.; Nalley, D.
2014-12-01
In this study, a new wavelet-based multi-scale performance measure (Multiscale Nash Sutcliffe Criteria, and Multiscale Normalized Root Mean Square Error) for hydrological model comparison was developed and tested. The new measure provides a quantitative assessment of model performance across different timescales. Model and observed time series are decomposed using the à trous wavelet transform, and performance measures of the model are obtained at each time scale. The usefulness of the new measure was tested using real as well as synthetic case studies. The real case studies included simulation results from the Soil Water Assessment Tool (SWAT), as well as statistical models (the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods). Data from India and Canada were used. The synthetic case studies included different kinds of errors (e.g., timing error, as well as under and over prediction of high and low flows) in outputs from a hydrologic model. It was found that the proposed wavelet-based performance measures (i.e., MNSC and MNRMSE) are more reliable than traditional performance measures such as the Nash Sutcliffe Criteria, Root Mean Square Error, and Normalized Root Mean Square Error. It was shown that the new measure can be used to compare different hydrological models, as well as help in model calibration.
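A rough sketch of the idea, assuming a standard à trous decomposition with a B3-spline kernel and the usual Nash-Sutcliffe formula applied scale by scale (the authors' exact implementation details are not given in the abstract):

```python
import numpy as np

def a_trous(x, levels):
    """À trous (stationary) wavelet decomposition with a B3-spline kernel."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    smooth, details = np.asarray(x, dtype=float), []
    for j in range(levels):
        hj = np.zeros((len(h) - 1) * 2**j + 1)   # kernel with 2^j - 1 "holes"
        hj[:: 2**j] = h
        nxt = np.convolve(smooth, hj, mode="same")
        details.append(smooth - nxt)
        smooth = nxt
    return details, smooth

def multiscale_nse(obs, sim, levels=3):
    """Nash-Sutcliffe efficiency evaluated separately at each wavelet scale."""
    d_o, s_o = a_trous(obs, levels)
    d_s, s_s = a_trous(sim, levels)
    return [1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
            for o, s in zip(d_o + [s_o], d_s + [s_s])]

rng = np.random.default_rng(3)
q_obs = 2.0 + np.sin(np.arange(256) / 10.0)        # synthetic "observed" flow
q_sim = q_obs + 0.05 * rng.normal(size=256)        # slightly noisy "model" output
print([round(v, 3) for v in multiscale_nse(q_obs, q_sim)])
```

Because the details plus the final smooth reconstruct the signal exactly, a model that matches the observations at every scale scores 1 at every scale, while timing errors degrade mainly the fine-scale scores.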
Review of methods for estimating cetacean density from passive acoustics
Thomas, Len; Marques, Tiago
Overview of methods for estimating the density of cetaceans from fixed passive acoustic devices, presented at the SIO Symposium: Estimating Cetacean Density from Passive Acoustics, 16 July 2009. Methods should be applicable ...
Density Estimation with Stagewise Optimization of the Empirical Risk
Klemelä, Jussi
We consider multivariate density estimation with identically distributed observations. We study a density estimator which is a convex combination of functions ...
DENSITY ESTIMATION AND RANDOM VARIATE GENERATION USING MULTILAYER NETWORKS
Magdon-Ismail, Malik
In this paper we consider two important topics: density estimation and random variate generation. First, we develop two new methods for density estimation, a stochastic method and a related ...
Kernel density estimation using graphical processing unit
NASA Astrophysics Data System (ADS)
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
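The per-node computation that the abstract assigns to GPU scalar processors is embarrassingly parallel: each node's kernel sum is independent of every other node's. A CPU-side numpy sketch of the same bivariate Gaussian KDE evaluated at equally spaced nodes (sample size, grid, and bandwidth are illustrative):

```python
import numpy as np

def kde_grid(points, nodes, bandwidth):
    """Bivariate Gaussian KDE evaluated at node points; each node's sum is
    independent of the others, which is what maps one node per GPU thread."""
    d2 = np.sum((nodes[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    norm = 1.0 / (2.0 * np.pi * bandwidth**2 * len(points))
    return norm * np.exp(-0.5 * d2 / bandwidth**2).sum(axis=1)

rng = np.random.default_rng(4)
pts = rng.normal(size=(2000, 2))                   # bivariate normal particles
gx, gy = np.meshgrid(np.linspace(-4, 4, 41), np.linspace(-4, 4, 41))
nodes = np.column_stack([gx.ravel(), gy.ravel()])  # equally-spaced node points
density = kde_grid(pts, nodes, bandwidth=0.3)
print(density.max())                               # peak near the origin
```

On a GPU, the inner sum over particles would run once per thread, one thread per node, which is exactly the decomposition the abstract describes.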
Democracy functions of wavelet bases in general Lorentz spaces
Garrigós, Gustavo; Hernández, Eugenio; de Natividade, Maria
We compute the democracy functions associated with wavelet bases in general Lorentz spaces, namely the lower and upper democracy functions h_l(N) = inf_{#Γ=N} ‖Σ_{Q∈Γ} ψ_Q‖_X and h_r(N) = sup_{#Γ=N} ‖Σ_{Q∈Γ} ψ_Q‖_X, where {ψ_Q} denotes the normalized wavelet system indexed by dyadic cubes Q.
WAVELET-BASED DISTRIBUTED SOURCE CODING OF VIDEO
Fowler, James E.
A wavelet-based video coder built on the principles of distributed source coding with side information known only to the decoder [1] is described. In experimental results, the proposed wavelet-based algorithm significantly outperforms a similar technique constructed with JPEG-like intraframe coding.
Wavelet-based image analysis system for soil texture analysis
NASA Astrophysics Data System (ADS)
Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John
2003-05-01
Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer) or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet-frame-based features representing the texture content of soil images are extracted and categorized by applying a maximum-likelihood criterion. The soil texture analysis system has been tested successfully, with an accuracy of 91% in classifying soil samples into one of three general categories of soil texture. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
Density estimation with multivariate histograms and best basis selection
Klemelä, Jussi
We consider estimation of multivariate densities ... the optimal amount of presmoothing depends on the spatial inhomogeneity of the density.
Force Estimation and Prediction from Time-Varying Density Images
Ratilal, Purnima
We present methods for estimating forces which drive motion observed in density image sequences. Using these forces, we also present methods for predicting velocity and density evolution. To do this, we formulate and apply ...
Review of methods for estimating cetacean density from passive acoustics
Thomas, Len
Presented at ... Using Passive Acoustics, 13 September 2009. A 3-year project (May 2007-2010), with objectives including: 1. Develop methods for estimating the density of cetaceans from fixed passive acoustic devices.
ESTIMATING THE DENSITY OF DRY SNOW LAYERS FROM HARDNESS, AND HARDNESS FROM DENSITY
Kim, Daehyun; Jamieson, Bruce
Relations between density and hardness of dry snow layers have been established for common grain types. These relations have been widely used to estimate the density of layers from hardness, and to estimate the hardness of layers in snowpack evolution models. Since 2000, the database of snow layers has ...
Coarse-to-fine wavelet-based airport detection
NASA Astrophysics Data System (ADS)
Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun
2015-10-01
Airport detection in optical remote sensing images has attracted great interest in applications such as military reconnaissance and traffic control. However, most popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, making it difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in a large data volume, which limits real-time processing. Most previous works focus on solving only one of these problems, and thus cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited for classifying and coarsely selecting airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
A wavelet-based method for multispectral face recognition
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian
2012-06-01
A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measure for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images of the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from the three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset gives the best fusion performance compared to the other two datasets.
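The Hamming-distance matching step can be sketched as follows; the byte codes below are random stand-ins for the face pattern byte (FPB) features, which the abstract does not specify in detail:

```python
def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary codes."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# hypothetical gallery of enrolled binary codes (illustrative, not real FPBs)
gallery = {"alice": b"\xf0\x0f\xaa", "bob": b"\x0f\xf0\x55"}
probe = b"\xf0\x0f\xab"               # differs from alice's code in a single bit
best = min(gallery, key=lambda name: hamming(gallery[name], probe))
print(best, hamming(gallery[best], probe))   # -> alice 1
```

Identification then reduces to reporting the gallery code with the smallest bit-level distance to the probe code.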
Experimental and numerical evaluation of wavelet based damage detection methodologies
NASA Astrophysics Data System (ADS)
Quiñones, Mireya M.; Montejo, Luis A.; Jang, Shinae
2015-03-01
This article presents an evaluation of the capabilities of wavelet-based methodologies for damage identification in civil structures. Two different approaches were evaluated: (1) analysis of the evolution of the structure's frequencies by means of the continuous wavelet transform, and (2) analysis of the singularities generated in the high-frequency response of the structure through the detail functions obtained via the fast wavelet transform. The methodologies were evaluated using experimental and numerically simulated data. It was found that the selection of appropriate wavelet parameters is critical for a successful analysis of the signal. Wavelet parameters should be selected based on the expected frequency content of the signal and the desired time and frequency resolutions. Identification of frequency shifts via ridge extraction of the wavelet map was successful in most of the experimental and numerical scenarios investigated. Moreover, the frequency shift can be inferred most of the time, but the exact time at which it occurs is not evident. However, this information can be retrieved from the spike locations in the fast wavelet transform analysis. Therefore, it is recommended to perform both types of analysis and consider the results together.
Wavelet-based characterization of gait signal for neurological abnormalities.
Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S
2015-02-01
Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and the lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control to facilitate their diagnosis can be useful in quantitatively monitoring disease progression and in efficient treatment planning. To this end, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To enable automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 amyotrophic lateral sclerosis (ALS), and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. PMID:25661004
Wavelet-based face verification for constrained platforms
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2005-03-01
Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints, or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, as few pre-processing procedures as possible should be applied to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4, and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficiency and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level-3 LL-subband results in robustness against high-rate compression and noise interference.
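A minimal sketch of the LL-subband feature idea, using a plain 2-D Haar approximation in place of the paper's actual wavelet filter (numpy and the sum-over-2 normalization are assumptions):

```python
import numpy as np

def ll_subband(image, levels=3):
    """Approximation (LL) coefficients of a 2-D Haar wavelet transform
    after `levels` decompositions: each level halves both dimensions by
    combining 2x2 blocks (sum divided by 2, the Haar LL normalization)."""
    a = np.asarray(image, dtype=float)
    for _ in range(levels):
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
        a = (a[0::2, 0::2] + a[0::2, 1::2]
             + a[1::2, 0::2] + a[1::2, 1::2]) / 2.0
    return a

# A 64x64 "face" shrinks to an 8x8 LL subband at level 3
face = np.random.default_rng(0).random((64, 64))
feat = ll_subband(face, levels=3).ravel()
```

At level 3 the feature vector has 64 entries instead of 4096 pixels, which is what makes the scheme attractive for smartcard-class storage.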
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
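For intuition, here is a classical parametric counterpart of the distance-method idea: under a homogeneous Poisson pattern, the squared point-to-nearest-plant distance is exponential with rate pi*lambda, which gives a closed-form density estimate. This is a sketch of that textbook special case, not the paper's nonparametric order-statistics estimator:

```python
import math, random

def poisson_density_mle(sq_distances):
    """MLE of plant density for a homogeneous Poisson pattern, from
    squared point-to-nearest-plant distances: since no plant lies within
    radius r with probability exp(-lambda*pi*r^2), R^2 is exponential
    with rate pi*lambda, so lambda_hat = n / (pi * sum R_i^2)."""
    n = len(sq_distances)
    return n / (math.pi * sum(sq_distances))

# Toy check: simulate squared distances from the exponential model
rng = random.Random(42)
true_lambda = 5.0
r2 = [rng.expovariate(math.pi * true_lambda) for _ in range(20000)]
lam_hat = poisson_density_mle(r2)
```

The paper's contribution is precisely to avoid this Poisson assumption while retaining consistency for regular and aggregated patterns.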
Morphology driven density distribution estimation for small bodies
NASA Astrophysics Data System (ADS)
Takahashi, Yu; Scheeres, D. J.
2014-05-01
We explore methods to detect and characterize the internal mass distribution of small bodies using the gravity field and shape of the body as data, both of which are determined from the orbit determination process. The discrepancies in the spherical harmonic coefficients are compared between the measured gravity field and the gravity field generated under a homogeneous density assumption. The discrepancies are shown for six different heterogeneous density distribution models and two small bodies, namely 1999 KW4 and Castalia. Using these differences, a constraint is enforced on the internal density distribution of an asteroid, creating an archive of characteristics associated with the same-degree spherical harmonic coefficients. Following the initial characterization of the heterogeneous density distribution models, a generalized density estimation method to recover the hypothetical (i.e., nominal) density distribution of the body is considered. We propose this method as the block density estimation, which dissects the entire body into small slivers and blocks, each homogeneous within itself, to estimate their density values. Significant similarities are observed between the block model and mass concentrations. However, the block model does not suffer errors from shape mismodeling, and the number of blocks can be controlled with ease to yield a unique solution to the density distribution. The results show that the block density estimation approximates the given gravity field well, yielding higher accuracy as the resolution of the density map is increased. The estimated density distribution also reproduces the surface potential and acceleration to within 10% for the particular cases tested in the simulations, an accuracy that is not achievable with the conventional spherical harmonic gravity field.
The block density estimation can be a useful tool for recovering the internal density distribution of small bodies for scientific reasons and for mapping out the gravity field environment in close proximity to small body’s surface for accurate trajectory/safe navigation purposes to be used for future missions.
Wavelet-based AR-SVM for health monitoring of smart structures
NASA Astrophysics Data System (ADS)
Kim, Yeesock; Chong, Jo Woon; Chon, Ki H.; Kim, JungMi
2013-01-01
This paper proposes a novel structural health monitoring framework for damage detection of smart structures. The framework is developed through the integration of the discrete wavelet transform, an autoregressive (AR) model, damage-sensitive features, and a support vector machine (SVM). The steps of the method are the following: (1) the wavelet-based AR (WAR) model estimates vibration signals obtained from both the undamaged and damaged smart structures under a variety of random signals; (2) a new damage-sensitive feature is formulated in terms of the AR parameters estimated from the structural velocity responses; and then (3) the SVM is applied to each group of damaged and undamaged data sets in order to optimally separate them into either damaged or healthy groups. To demonstrate the effectiveness of the proposed structural health monitoring framework, a three-story smart building equipped with a magnetorheological (MR) damper under artificial earthquake signals is studied. It is shown from the simulation that the proposed health monitoring scheme is effective in detecting damage of the smart structures in an efficient way.
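Step (2) above, extracting AR parameters as a damage-sensitive feature, can be sketched with a least-squares AR fit. This is an illustrative sketch (numpy assumed); the paper fits AR models to wavelet-decomposed responses and then separates the feature vectors with an SVM, both of which are omitted here:

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Least-squares AR(p) fit x[t] ~ sum_k a_k * x[t-k]; the fitted
    coefficients a_k serve as a damage-sensitive feature vector."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Column k holds the lag-(k+1) regressor x[t-k-1] for t = order..n-1
    X = np.column_stack([x[order - k - 1: n - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

# Toy check: recover the coefficient of a simulated AR(1) response
rng = np.random.default_rng(11)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + rng.normal()
feature = ar_coefficients(x, order=1)
```

In the paper's framework one such vector would be computed per wavelet subband, and damaged versus undamaged feature sets would then be classified with the SVM.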
Estimating the central densities of stellar systems
NASA Astrophysics Data System (ADS)
Merritt, David
1988-02-01
The sensitivity of King's (1966) core-fitting formula to velocity anisotropy is discussed. For stable, spherical models, King's formula can overestimate the central density by at least 50 percent. For nonspherical models, the error can be 150 percent or more. In all cases, the sensitivity of the core-fitting formula to anisotropy can be reduced somewhat if velocity dispersions are averaged over the inner one or two core radii.
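King's core-fitting formula discussed above is commonly written rho_0 = 9*sigma^2 / (4*pi*G*r_c^2), with sigma the central velocity dispersion and r_c the core radius. A small numeric sketch (the astrophysical value of G and the example inputs are assumptions; the abstract's point is that velocity anisotropy can bias this estimate upward by 50 to 150 percent):

```python
import math

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def king_central_density(sigma_kms, rc_pc):
    """King (1966) core-fitting estimate of the central density,
    rho_0 = 9 sigma^2 / (4 pi G rc^2), in Msun / pc^3."""
    return 9.0 * sigma_kms ** 2 / (4.0 * math.pi * G * rc_pc ** 2)

# A cluster-like core: sigma = 10 km/s, rc = 1 pc
rho0 = king_central_density(sigma_kms=10.0, rc_pc=1.0)
```

With the abstract's caveat, the true central density could be substantially lower than this estimate for anisotropic or nonspherical systems.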
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
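The mean-subtraction method lends itself to a very small sketch: remove and record the mean of each spatial plane of a spatially-low-pass subband before encoding, and add it back on decompression. This is an illustration, not the NASA implementation; numpy and the cube layout (bands, rows, cols) are assumptions:

```python
import numpy as np

def mean_subtract(subband):
    """Encoder side: remove (and record) the mean of each spatial plane
    so the planes are zero-mean before entropy coding."""
    means = subband.mean(axis=(1, 2))
    return subband - means[:, None, None], means

def add_means_back(zero_mean, means):
    """Decoder side: restore the stored per-plane means."""
    return zero_mean + means[:, None, None]

# A subband whose planes have means far from zero, as described above
cube = np.random.default_rng(1).normal(5.0, 1.0, size=(4, 8, 8))
zm, mu = mean_subtract(cube)
```

Only the per-band means (a few bits each) need to travel in the bit stream, matching the negligible overhead the text describes.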
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a RaspberryPi (model B).
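The factor-of-64 database reduction quoted above follows directly from each decomposition level halving both image dimensions, so a level-K approximation holds 2^(2K) times fewer pixels. A sketch of the arithmetic:

```python
def approx_size(rows, cols, K):
    """Pixel count of the level-K approximation subband: each wavelet
    level halves both dimensions, shrinking the count by 4 per level,
    i.e. by 2**(2*K) overall."""
    return (rows >> K) * (cols >> K)

orig = 128 * 128
reduced = approx_size(128, 128, 3)
factor = orig // reduced
```

For K = 3 this gives the factor of 64 cited in the abstract, independent of the image size (as long as it is divisible by 2^K).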
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested (namely, the Multiscale Nash-Sutcliffe Criteria and the Multiscale Normalized Root Mean Square Error). The main advantage of this approach is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN, and ARMA models, data from the Cauvery River Basin (India) and the Fraser River (Canada) were used.
The study also explored the effect of the choice of wavelet in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
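A minimal sketch of the multiscale performance idea: decompose both series with a simplified à trous-style transform, then score each scale separately with the Nash-Sutcliffe efficiency. This is an illustrative reimplementation under stated assumptions (numpy, a dilated [1/4, 1/2, 1/4] smoothing kernel, edge padding), not the authors' code:

```python
import numpy as np

def atrous_details(x, levels=3):
    """Simplified a trous decomposition: successive smoothings with a
    dilated averaging kernel; detail_j = smooth_{j-1} - smooth_j.
    Returns [d1, ..., dJ, final smooth], which sums back to x."""
    comps, cur = [], np.asarray(x, dtype=float)
    for j in range(levels):
        step = 2 ** j
        pad = np.pad(cur, step, mode="edge")
        smooth = (pad[: len(cur)] + 2 * cur + pad[2 * step:]) / 4.0
        comps.append(cur - smooth)
        cur = smooth
    comps.append(cur)
    return comps

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multiscale_nse(obs, sim, levels=3):
    """NSE computed scale by scale on matching a trous components."""
    return [nse(o, s) for o, s in zip(atrous_details(obs, levels),
                                      atrous_details(sim, levels))]

# A constant bias only hurts the coarse residual scale
t = np.linspace(0, 20, 256)
obs = np.sin(t) + 0.3 * np.sin(5 * t)
scores = multiscale_nse(obs, obs + 0.5, levels=3)
```

The detail-scale scores stay at 1.0 while the coarse score drops, illustrating how the measure attributes a given error type to specific time scales.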
Lower crustal density estimation using the density-slowness relationship: a preliminary study
Jones, Gary Wayne
1996-01-01
-facies metamorphic rocks. Velocity-density data was compiled from the literature for pressures greater than 600 MPa and linear fits of density on slowness were made. No correction was made for the effect of temperature. Densities were then estimated for a number...
A Wavelet-Based Noise Reduction Algorithm and Its Clinical Evaluation in Cochlear Implants
Ye, Hua; Deng, Guang; Mauger, Stefan J.; Hersbach, Adam A.; Dawson, Pam W.; Heasman, John M.
2013-01-01
Noise reduction is often essential for cochlear implant (CI) recipients to achieve acceptable speech perception in noisy environments. Most noise reduction algorithms applied to audio signals are based on time-frequency representations of the input, such as the Fourier transform. Algorithms based on other representations may also be able to provide comparable or improved speech perception and listening quality. In this paper, a noise reduction algorithm for CI sound processing is proposed based on the wavelet transform. The algorithm uses a dual-tree complex discrete wavelet transform followed by shrinkage of the wavelet coefficients based on a statistical estimate of the noise variance. The proposed noise reduction algorithm was evaluated by comparing its performance to those of many existing wavelet-based algorithms. The speech transmission index (STI) of the proposed algorithm is significantly better than those of the other tested algorithms for speech-weighted noise at different signal-to-noise ratios. The effectiveness of the proposed system was clinically evaluated with CI recipients. A significant improvement in speech perception of 1.9 dB was found on average in speech-weighted noise. PMID:24086605
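The shrink-by-estimated-noise-variance idea can be sketched with a single-level real Haar transform and the robust MAD noise estimate. Note the assumptions: this stands in for the paper's dual-tree complex wavelet transform (a different filter bank), and the universal threshold with soft shrinkage is a standard choice, not necessarily the authors':

```python
import numpy as np

def haar_1level(x):
    x = np.asarray(x, dtype=float)[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail
    return a, d

def inv_haar_1level(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def denoise(x):
    """Single-level wavelet shrinkage: estimate the noise s.d. from the
    detail coefficients (robust MAD rule), then soft-threshold them."""
    a, d = haar_1level(x)
    sigma = np.median(np.abs(d)) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(d) + 1))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return inv_haar_1level(a, d)

rng = np.random.default_rng(7)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 64)  # piecewise-constant "signal"
noisy = clean + rng.normal(0.0, 0.5, clean.size)
out = denoise(noisy)
```

Estimating sigma from the data is what lets the threshold track the noise level, the same principle the CI algorithm applies per subband.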
Fast wavelet-based image characterization for highly adaptive image retrieval.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Cochener, Béatrice; Roux, Christian
2012-04-01
Adaptive wavelet-based image characterizations have been proposed in previous works for content-based image retrieval (CBIR) applications. In these applications, the same wavelet basis was used to characterize each query image: This wavelet basis was tuned to maximize the retrieval performance in a training data set. We take it one step further in this paper: A different wavelet basis is used to characterize each query image. A regression function, which is tuned to maximize the retrieval performance in the training data set, is used to estimate the best wavelet filter, i.e., in terms of expected retrieval performance, for each query image. A simple image characterization, which is based on the standardized moments of the wavelet coefficient distributions, is presented. An algorithm is proposed to compute this image characterization almost instantly for every possible separable or nonseparable wavelet filter. Therefore, using a different wavelet basis for each query image does not considerably increase computation times. On the other hand, significant retrieval performance increases were obtained in a medical image data set, a texture data set, a face recognition data set, and an object picture data set. This additional flexibility in wavelet adaptation paves the way to relevance feedback on image characterization itself and not simply on the way image characterizations are combined. PMID:22194244
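A sketch of the moment-based characterization named above: summarize a subband's coefficient distribution by its standardized moments. This is an illustration (numpy assumed; the choice of moment orders is ours, not necessarily the paper's):

```python
import numpy as np

def standardized_moments(coeffs, orders=(2, 3, 4)):
    """Characterize a wavelet subband by standardized moments of its
    coefficient distribution: variance, skewness, kurtosis, ..."""
    c = np.asarray(coeffs, dtype=float).ravel()
    mu, sd = c.mean(), c.std()
    feats = []
    for k in orders:
        feats.append(sd ** 2 if k == 2 else np.mean(((c - mu) / sd) ** k))
    return feats

# Gaussian-like coefficients: variance ~1, skewness ~0, kurtosis ~3
rng = np.random.default_rng(0)
feats = standardized_moments(rng.normal(size=100000))
```

Because the moments are cheap to update, such a characterization can be recomputed quickly for many candidate wavelet filters, which is the speed the abstract emphasizes.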
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension is given to line-transect sampling.
Yaqub, Maqsood; Boellaard, Ronald; Schuitemaker, Alie; van Berckel, Bart N M; Lammertsma, Adriaan A
2008-11-01
The purpose of the present study was to investigate the use of various wavelet-based techniques for denoising of [11C](R)-PK11195 time activity curves (TACs) in order to improve the accuracy and precision of PET kinetic parameters, such as volume of distribution (V(T)) and distribution volume ratio with reference region (DVR). Simulated and clinical TACs were filtered using two different categories of wavelet filters: (1) wavelet shrinkage using a constant or a newly developed time-varying threshold and (2) "statistical" filters, which filter extreme wavelet coefficients using a set of "calibration" TACs. PET pharmacokinetic parameters were estimated using linear models (plasma Logan and reference Logan analyses). For simulated noisy TACs, optimized wavelet-based filters improved the residual sum of squared errors with respect to the original noise-free TACs. Furthermore, clinical results and simulations were in agreement. Plasma Logan V(T) values increased after filtering, but no differences were seen in reference Logan DVR values. This increase in plasma Logan V(T) suggests a reduction of noise-induced bias by wavelet-based denoising, as was seen in the simulations. Wavelet denoising of TACs for [11C](R)-PK11195 PET studies is therefore useful when parametric Logan-based V(T) is the parameter of interest. PMID:19070241
Mean thermospheric density estimation derived from satellite constellations
NASA Astrophysics Data System (ADS)
Li, Alan; Close, Sigrid
2015-10-01
This paper defines a method to estimate the mean neutral density of the thermosphere given many satellites of the same form factor travelling in similar regions of space. A priori information for the estimation scheme includes ranging measurements and a general knowledge of the onboard ADACS, although precise measurements are not required for the latter. The estimation procedure uses order statistics to estimate the probability of the minimum achievable drag coefficient, and amalgamating all measurements across multiple time periods allows estimation of the probability density of the ballistic factor itself. The model does not depend on prior models of the atmosphere; instead we require an estimate of the minimum achievable drag coefficient, which is based upon physics models of simple shapes in free molecular flow. From the statistics of the minimum, error statistics on the estimated atmospheric density can be calculated. Barring measurement errors from the ranging procedure itself, it is shown that with a constellation of 10 satellites, we can achieve a standard deviation of roughly 4% on the estimated mean neutral density. As more satellites are added to the constellation, the result converges towards the lower limit of the achievable drag coefficient, and accuracy becomes limited by the quality of the ranging measurements and the uncertainty in the accommodation coefficient. Comparisons are made to existing atmospheric models such as NRLMSISE-00 and JB2006.
Density estimation implications of increasing ambient noise on
Thomas, Len
estimate the mean probability of detecting the animals or cues of interest; failing to account for changes in this probability would lead to biased density estimates. Here we evaluate the influence of ambient noise on the detection and classification of beaked whale clicks at the Atlantic Undersea Test and Evaluation Center (AUTEC).
Samb, Rawane
2010-01-01
This paper deals with the nonparametric density estimation of the regression error term, assuming it is independent of the covariate. The difference between the feasible estimator, which uses the estimated residuals, and the unfeasible one, which uses the true residuals, is studied. An optimal choice of the bandwidth used to estimate the residuals is given. We also study the asymptotic normality of the feasible kernel estimator and its rate-optimality.
Wavelet-based analogous phase scintillation index for high latitudes
NASA Astrophysics Data System (ADS)
Ahmed, A.; Tiwari, R.; Strangeways, H. J.; Dlay, S.; Johnsen, M. G.
2015-08-01
The Global Positioning System (GPS) performance at high latitudes can be severely affected by ionospheric scintillation due to the presence of small-scale, time-varying electron density irregularities. In this paper, an improved analogous phase scintillation index derived using a wavelet-transform-based filtering technique is presented to represent the effects of scintillation regionally at European high latitudes. The improved analogous phase index is then compared with the original analogous phase index and the phase scintillation index using 1 year of data from Trondheim, Norway (63.41°N, 10.4°E). This index provides samples at a 1 min rate using raw total electron content (TEC) data at 1 Hz for the prediction of phase scintillation, in contrast to scintillation monitoring receivers (such as NovAtel Global Navigation Satellite Systems Ionospheric Scintillation and TEC Monitor receivers), which operate at a 50 Hz rate and are thus rather computationally intensive. The ability to estimate phase scintillation effects without requiring high-sample-rate data makes the improved analogous phase index a suitable candidate for use in regional geodetic dual-frequency GPS receivers to efficiently update the tracking loop parameters based on tracking jitter variance.
Tractable multivariate binary density estimation and the restricted Boltzmann forest.
Larochelle, Hugo; Bengio, Yoshua; Turian, Joseph
2010-09-01
We investigate the problem of estimating the density function of multivariate binary data. In particular, we focus on models for which computing the estimated probability of any data point is tractable. In such a setting, previous work has mostly concentrated on mixture modeling approaches. We argue that for the problem of tractable density estimation, the restricted Boltzmann machine (RBM) provides a competitive framework for multivariate binary density modeling. With this in mind, we also generalize the RBM framework and present the restricted Boltzmann forest (RBForest), which replaces the binary variables in the hidden layer of RBMs with groups of tree-structured binary variables. This extension allows us to obtain models that have more modeling capacity but remain tractable. In experiments on several data sets, we demonstrate the competitiveness of this approach and study some of its properties. PMID:20569177
Quantiles, parametric-select density estimation, and bi-information parameter estimators
NASA Technical Reports Server (NTRS)
Parzen, E.
1982-01-01
A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.
EFFICIENT NONPARAMETRIC DENSITY ESTIMATION ON THE SPHERE WITH APPLICATIONS IN FLUID MECHANICS
Egecioglu, Ömer
Keywords: density, nonparametric estimation, fluid mechanics, convergence, kernel method, efficient algorithm. An important application of nonparametric density estimation is in computational fluid mechanics.
NONPARAMETRIC ESTIMATION OF MULTIVARIATE CONVEX-TRANSFORMED DENSITIES
Seregin, Arseni; Wellner, Jon A.
2011-01-01
We study estimation of multivariate densities p of the form p(x) = h(g(x)) for x ∈ R^d, for a fixed monotone function h and an unknown convex function g. The canonical example is h(y) = e^{-y} for y ∈ R; in this case, the resulting class of densities P(e^{-y}) = {p = exp(-g) : g is convex} is well known as the class of log-concave densities. Other functions h allow for classes of densities with heavier tails than the log-concave class. We first investigate when the maximum likelihood estimator p̂ exists for the class P(h) for various choices of monotone transformations h, including decreasing and increasing functions h. The resulting models for increasing transformations h extend the classes of log-convex densities studied previously in the econometrics literature, corresponding to h(y) = exp(y). We then establish consistency of the maximum likelihood estimator for fairly general functions h, including the log-concave class P(e^{-y}) and many others. In a final section, we provide asymptotic minimax lower bounds for the estimation of p and its vector of derivatives at a fixed point x0 under natural smoothness hypotheses on h and g. The proofs rely heavily on results from convex analysis. PMID:21423877
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
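The first estimator discussed, the kernel estimator with a data-driven scaling factor, can be sketched as follows. The automatic choice shown here is Silverman's rule of thumb, a standard stand-in rather than the algorithm proposed in the paper:

```python
import math, random

def kernel_density(sample, h=None):
    """Gaussian kernel density estimator; if no scaling factor h is
    given, use Silverman's rule of thumb h = 1.06 * s * n**(-1/5)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    if h is None:
        h = 1.06 * s * n ** (-0.2)

    def f(x):
        z = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
        return z / (n * h * math.sqrt(2.0 * math.pi))

    return f

rng = random.Random(3)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
fhat = kernel_density(data)  # density of a standard normal sample
```

The whole difficulty the paper addresses lies in choosing h well: the estimate is a sum of bumps whose common width controls the bias-variance trade-off.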
Estimate of snow density knowing grain and share hardness
NASA Astrophysics Data System (ADS)
Valt, Mauro; Cianfarra, Paola; Cagnati, Anselmo; Chiambretti, Igor; Moro, Daniele
2010-05-01
Alpine avalanche warning services produce snow profiles weekly. Usually such profiles are made in horizontal snow fields distributed homogeneously by altitude and climatic micro-area. Such a profile allows identification of grain shape, dimension, and hardness (hand test). Horizontal coring of each layer allows snow density measurement. These data allow avalanche hazard evaluation and an estimation of the Snow Water Equivalent (SWE). Nevertheless, measuring the density of very thin layers (less than 5 cm thick) by coring is very difficult, and such layers are usually not measured by snow technicians. To bypass this problem, a statistical analysis was performed to assign density values to layers that cannot be measured. Knowing each layer's thickness and density, the SWE can then be estimated correctly. This paper presents typical snow density values by snow hardness value and grain type for the Eastern Italian Alps. The study is based on 2500 snow profiles with 17000 sampled snow layers from the Dolomites and Venetian Prealps (Eastern Alps). The table of typical snow density values for each grain type is used by the YETI software, which processes snow profiles and automatically evaluates the SWE. This method allows a better use of avalanche warning service datasets for SWE estimation and local evaluation of yearly SWE trends for each snow field.
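The SWE computation the profiles feed into is a simple sum over layers of thickness times density; thin layers just take their density from the grain-type/hardness lookup table instead of a coring measurement. A sketch with hypothetical layer values:

```python
def swe_mm(layers):
    """Snow water equivalent (mm) of a profile given per-layer
    (thickness_cm, density_kg_m3) pairs. Since 1 kg/m^2 of water is
    1 mm, SWE = sum(thickness_m * density) = sum(h_cm * rho / 100)."""
    return sum(h_cm * rho / 100.0 for h_cm, rho in layers)

# Hypothetical profile; the thin crust's density comes from the table
profile = [(30, 120.0),   # new snow, density measured by coring
           (3, 180.0),    # thin crust, density taken from the table
           (50, 300.0)]   # old rounded grains, measured
swe = swe_mm(profile)
```

Filling in table densities for the unmeasurable thin layers is exactly what keeps this sum from being systematically biased low.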
Comparison of parzen density and frequency histogram as estimators of probability density functions.
Glavinović, M I
1996-01-01
In neurobiology, as in other fields, the frequency histogram is a traditional tool for determining the probability density function (pdf) of random processes, although other methods have been shown to be more efficient estimators. In this study, the frequency histogram is compared with the Parzen density estimator, a method that consists of convolving each measurement with a weighting function of choice (Gaussian, rectangular, etc.) and using their sum as an estimate of the pdf of the random process. The difference in their performance in evaluating two types of pdfs that occur commonly in quantal analysis (monomodal and multimodal with equidistant peaks) is demonstrated numerically by using the integrated square error criterion and assuming a knowledge of the "true" pdf. The error of the Parzen density estimates decreases faster as a function of the number of observations than that of the frequency histogram, indicating that they are asymptotically more efficient. A variety of "reasonable" weighting functions can provide similarly efficient Parzen density estimates, but their efficiency greatly depends on their width. The optimal widths determined using the integrated square error criterion, the harmonic analysis (applicable only to multimodal pdfs with equidistant peaks), and the "test graphs" (the graphs of the second derivatives of the Parzen density estimates that do not assume a knowledge of the "true" pdf, but depend on the distinction between the "essential features" of the pdf and the "random fluctuations") were compared and found to be similar. PMID:9019720
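Under a known "true" pdf, the integrated square error comparison described above can be reproduced directly. The bimodal test pdf, bin count, and kernel width below are illustrative choices, not the paper's:

```python
import numpy as np

def true_pdf(x):
    # "Quantal"-style bimodal pdf: two equidistant unit-variance peaks.
    c = 1.0 / np.sqrt(2.0 * np.pi)
    return 0.5 * c * (np.exp(-0.5 * (x - 1.0) ** 2) + np.exp(-0.5 * (x - 4.0) ** 2))

def ise(estimate, x):
    # Integrated square error against the known pdf on a uniform grid.
    return float(((estimate - true_pdf(x)) ** 2).sum() * (x[1] - x[0]))

def histogram_estimate(sample, x, bins=20):
    hist, edges = np.histogram(sample, bins=bins, range=(x[0], x[-1]), density=True)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return hist[idx]

def parzen_estimate(sample, x, h=0.35):
    # Gaussian weighting function of width h, summed over the sample.
    u = (x[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
comp = rng.integers(0, 2, size=4000)
sample = np.where(comp == 0, 1.0, 4.0) + rng.normal(size=4000)
x = np.linspace(-4.0, 9.0, 1301)
```

Comparing `ise(parzen_estimate(sample, x), x)` with `ise(histogram_estimate(sample, x), x)` illustrates the asymptotic-efficiency claim: the smooth Parzen estimate avoids the histogram's discretization bias.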
Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds
NASA Astrophysics Data System (ADS)
Arshakian, Tigran G.; Ossenkopf, Volker
2016-01-01
Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≥5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes, and we suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient.
Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions and prominent or enhanced dimensions.
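A one-dimensional sketch of the idea behind the WWCC: band-pass both maps at a chosen scale and correlate the filtered maps. A difference of Gaussians stands in here for the wavelet filter, and the scale values are illustrative, not the paper's:

```python
import numpy as np

def scale_filter(signal, s):
    """Band-pass a 1-D map at scale s via a difference of Gaussians,
    a simple stand-in for the wavelet filtering used by the WWCC."""
    x = np.arange(-4 * s, 4 * s + 1, dtype=float)
    def gauss(sig):
        g = np.exp(-0.5 * (x / sig) ** 2)
        return g / g.sum()
    kern = gauss(s) - gauss(2 * s)
    return np.convolve(signal, kern, mode="same")

def cross_corr(map1, map2, s):
    """Cross-correlation coefficient of two maps at scale s."""
    a = scale_filter(map1, s)
    b = scale_filter(map2, s)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Sweeping `s` yields a correlation spectrum; scales where the coefficient drops mark structural differences between the two maps, analogous to the HCN deviation reported above.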
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≥5 h^-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered, showing a good agreement with the true one from N-body simulations. The typical errors of about 10 km s^-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h^-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
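As a reminder of the linear-theory baseline the paper improves upon, the velocity field follows from the density contrast through the linearized continuity equation, ∇·v = −aHf δ. A one-dimensional periodic sketch, with the prefactor aHf set to 1 purely for illustration:

```python
import numpy as np

def linear_velocity_1d(delta, boxsize, aHf=1.0):
    """Linear-theory velocity for a periodic 1-D density contrast:
    solve i k v_k = -aHf * delta_k in Fourier space."""
    n = delta.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    delta_k = np.fft.fft(delta)
    v_k = np.zeros_like(delta_k)
    nz = k != 0.0
    v_k[nz] = 1j * aHf * delta_k[nz] / k[nz]  # v_k = i aHf delta_k / k
    return np.fft.ifft(v_k).real
```

For delta = sin(x) on a 2π box this returns cos(x), as integrating dv/dx = −δ requires; the paper's second-order Lagrangian scheme adds tidal-tensor corrections on top of this baseline.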
Diagnostically lossless medical image compression via wavelet-based background noise removal
Qi, Xiaojun
Removal of the background is essential in the archival and communication of medical images. An automated scheme is presented combining wavelet compression, the wavelet transform modulus maxima, convex hulls, and noise removal.
A Robust Adaptive Wavelet-based Method for Classification of Meningioma Histology Images
Rajpoot, Nasir
The limited number of samples is an important problem in the domain of histological image classification. This issue is inherent to the field due to the high complexity of histology image data.
Wavelet-Based Multiresolution Analysis of Wivenhoe Dam Water Temperatures
Percival, D. B.; Lennox, S. M.; Wang, Y.-G.; Darnell, R. E.
Water Resources Research
Wavelet-Based Image Compression Anti-Forensics
Stamm, Matthew C.; Liu, K. J. Ray
Because digital images can be modified with relative ease, considerable effort has been spent developing image forensic algorithms, and attention has also been given to anti-forensic operations designed to mislead forensic techniques.
A Thresholded Landweber Algorithm for Wavelet-based Sparse Poisson Deconvolution
Zhang, Ganchi; Kingsbury, Nick
A new iterative deconvolution algorithm is presented for noisy Poisson images based on wavelet sparse regularization, offering a good solution for 3D microscopy deconvolution.
Wavelet Based Inversion of Potential Field Data
Boschetti, F.; Hornby, P.
The analysis of potential field data represents one of the cheapest forms of geophysical exploration. The approach can be generically defined as inversion, in which more or less sophisticated algorithms are employed.
Wavelet-based spatial and temporal multiscaling: Bridging the atomistic and continuum space and time scales
Frantziskonis, G.; Deymier, P.
A wavelet-based multiscale methodology is presented that naturally addresses time scaling in addition to spatial scaling.
Density estimation in tiger populations: combining information for strong inference.
Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W
2012-07-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved. PMID:22919919
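The paper's joint Bayesian spatial capture-recapture model is not reproduced here, but the classical inverse-variance-weighted combination of independent estimates illustrates why pooling two data sources tightens the result; applied to the abstract's single-source numbers it lands close to the reported 8.5 ± 1.95:

```python
import math

def combine(estimates):
    """Inverse-variance-weighted combination of independent (mean, sd)
    estimates -- a textbook stand-in, not the paper's joint model."""
    weights = [1.0 / sd ** 2 for _, sd in estimates]
    mean = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    sd = math.sqrt(1.0 / sum(weights))
    return mean, sd

# Single-source estimates from the abstract (tigers / 100 km^2).
photo = (12.02, 3.02)
scat = (6.65, 2.37)
```

`combine([photo, scat])` gives roughly 8.70 ± 1.86, broadly consistent with the joint-model estimate even though the weighting scheme is much cruder.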
Estimating the Density of Honeybee Colonies across Their Natural Range
Paxton, Robert
To date, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists.
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682
Scatterer Number Density Considerations in Reference Phantom Based Attenuation Estimation
Rubert, Nicholas; Varghese, Tomy
2014-01-01
Attenuation estimation and imaging has the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue-mimicking (TM) phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and an SNR approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1,647 ROIs in 5 ex vivo bovine livers we find an envelope SNR of 1.10 ± 0.12 when imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article we examine attenuation estimation in numerical phantoms, TM phantoms with variable SNDs, and ex vivo bovine liver prior to and following thermal coagulation. We find that reference phantom based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SND, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find the standard deviation of attenuation slope estimates increases from 0.07 dB/cm MHz to 0.25 dB/cm MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in TM phantoms with a large estimation kernel size (16 mm axially by 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (< 0.01 dB/cm MHz). We also compare results obtained with reference phantom based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
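The reference phantom method estimates attenuation from the depth slope of the log spectral ratio between sample and reference. A minimal synthetic sketch follows; the attenuation values and the single-frequency, noise-free model are illustrative assumptions, not the study's data:

```python
import math

def attenuation_slope(log_ratio, depths_cm, freq_mhz, alpha_ref_db):
    """Least-squares slope of ln(S_sample/S_reference) vs depth gives
    -4 * (alpha_s - alpha_r) * f in nepers (round trip, power spectra);
    convert the difference to dB/cm/MHz and add the known reference value."""
    n = len(depths_cm)
    zbar = sum(depths_cm) / n
    ybar = sum(log_ratio) / n
    slope = sum((z - zbar) * (y - ybar) for z, y in zip(depths_cm, log_ratio)) \
        / sum((z - zbar) ** 2 for z in depths_cm)
    delta_alpha_np = -slope / (4.0 * freq_mhz)
    return alpha_ref_db + delta_alpha_np * (20.0 / math.log(10.0))

# Synthetic example: sample at 0.7 dB/cm/MHz, reference at 0.5, f = 6 MHz.
np_per_db = math.log(10.0) / 20.0
d_alpha = (0.7 - 0.5) * np_per_db            # Np/cm/MHz
depths = [z * 0.5 for z in range(10)]        # cm
ratio = [-4.0 * d_alpha * 6.0 * z for z in depths]
```

On this noise-free input the estimator recovers the assumed 0.7 dB/cm/MHz exactly; the study's variance findings concern what happens to this slope fit when the envelope statistics depart from Rayleigh.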
The Effect of Lidar Point Density on LAI Estimation
NASA Astrophysics Data System (ADS)
Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.
2013-12-01
Leaf Area Index (LAI) is an important measure of forest health, biomass, and carbon exchange, and is most commonly defined as the ratio of leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, so airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at a wavelength of 1064 nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, the minimum acquisition parameters (e.g., point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15 m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80 m apart and nine measurements were made in a grid pattern within a 20 x 20 m site. Dominant species include Hemlock, Beech, Sugar Maple, and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI.
Understanding LAI at fine scales may be particularly useful in forest inventory applications and tree health evaluations. However, such high-density data is often not available over large areas. In this study we progressively downsampled the high-density discrete lidar data and evaluated the effect on LAI estimation. The AccuPAR data was used as validation and results were compared to existing LAI metrics. This will enable us to determine the minimum point density required for airborne lidar LAI retrieval. Preliminary results show that the data may be substantially thinned to estimate site-level LAI. More detailed results will be presented at the conference.
Can modeling improve estimation of desert tortoise population densities?
Nussear, K.E.; Tracy, C.R.
2007-01-01
The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
Correction to "Electron density estimations derived from spacecraft potential measurements"
University of California, Berkeley
In the paper "Electron density estimations derived from spacecraft potential measurements," selected electron densities measured by WHISPER on Cluster SC1 in the solar wind were considered.
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial-point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km2. A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
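The detection component of such spatial capture-recapture models is commonly a half-normal function of the distance from an individual's activity center to the trap, shifted by an indicator for previous capture. A sketch with illustrative parameter values (not the study's estimates):

```python
import math

def detection_prob(dist_km, p0=0.3, sigma_km=2.0, prev_capture=False, beta=0.5):
    """Half-normal spatial detection with a trap-specific behavioral response.
    A positive logit-scale shift `beta` raises detection after a previous
    capture at the trap (attraction to bait, as the study's positive
    indicator estimate suggests). All parameter values here are hypothetical."""
    base = p0 * math.exp(-dist_km ** 2 / (2.0 * sigma_km ** 2))
    if prev_capture:
        odds = base / (1.0 - base) * math.exp(beta)
        return odds / (1.0 + odds)
    return base
```

Detection decays with distance from the activity center and rises after a prior capture, which is exactly the trap-dependence covariate the abstract argues for when bait is used.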
Volume estimation of multi-density nodules with thoracic CT
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; Li, Qin; Zeng, Rongping; Myers, Kyle J.; Sahiner, Berkman; Petrick, Nicholas
2014-03-01
The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multi-density nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of +/-12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile, are repeatedly estimated which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, for the new LHS module within NESSUS it was found that it had a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller amount of calculations to obtain low error answers with a high amount of confidence than MC. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ and the newest LHS module is a valuable new enhancement of the program.
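The MC-versus-LHS comparison can be illustrated with a toy response. This stand-alone sketch (standard-normal input, quadratic response, illustrative sample sizes) is in no way the NESSUS implementation:

```python
import random
from statistics import NormalDist, pstdev

def mc_mean(g, n, rng):
    # Plain Monte Carlo: n independent draws through the inverse normal CDF.
    return sum(g(NormalDist().inv_cdf(rng.random())) for _ in range(n)) / n

def lhs_mean(g, n, rng):
    # Latin hypercube in 1-D: one draw per equal-probability stratum.
    u = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(u)
    return sum(g(NormalDist().inv_cdf(ui)) for ui in u) / n

def spread(estimator, repeats, n, seed):
    # Empirical standard deviation of repeated mean estimates of E[x^2] = 1.
    rng = random.Random(seed)
    return pstdev(estimator(lambda x: x * x, n, rng) for _ in range(repeats))
```

Comparing `spread(lhs_mean, ...)` with `spread(mc_mean, ...)` at the same sample size shows the stratification effect the SAE test cases quantify: LHS estimates of the mean scatter far less than MC estimates.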
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method has shown that the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is better for faster-changing channels than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples.
This value is referenced in a look-up table to determine the combining ratio that prior simulations associated with the average magnitude of the deviation. This method is more complicated than the pilot-guided method due to the gain control circuitry, but does not have the real-time computational complexity of the blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
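One plausible reading of the blind method: after normalizing the received sequence to unit power, the BPSK amplitude A satisfies a tanh fixed-point equation solvable by bisection on (0, 1), and the combining ratio then follows from the estimated amplitude and noise variance (here taken as A/(1 − A²); whether a factor of 2 is folded in depends on convention). The exact equation form and all parameter values below are assumptions for illustration, not taken from the presentation:

```python
import math
import random

def blind_amplitude(y, iters=40):
    """Bisection for A in (0,1) solving A = mean(y * tanh(A*y/(1-A**2))),
    assuming y has been normalized so that mean(y**2) ~= 1 (so the noise
    variance is 1 - A**2). This fixed-point form is an assumed reading of
    the blind ML estimator described in the abstract."""
    def f(a):
        var = 1.0 - a * a
        return sum(v * math.tanh(a * v / var) for v in y) / len(y) - a
    lo, hi = 1e-3, 0.999
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Simulated unit-power BPSK frame: amplitude 0.8, noise variance 0.36.
rng = random.Random(2)
bits = [rng.choice((-1.0, 1.0)) for _ in range(20000)]
y = [0.8 * b + math.sqrt(0.36) * rng.gauss(0.0, 1.0) for b in bits]
a_hat = blind_amplitude(y)
combining_ratio = a_hat / (1.0 - a_hat * a_hat)
```

On this synthetic frame the bisection recovers the amplitude to within a few percent, illustrating the one-frame latency advantage, and the per-iteration pass over the whole frame illustrates the computational cost the abstract notes.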
Comparative study of different wavelet based neural network models for rainfall-runoff modeling
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.
2014-07-01
The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role for successful implementation of wavelet-based rainfall-runoff artificial neural network models, as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of hybrid wavelet-based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous and the discrete wavelet transformation types. The performances of the 92 developed wavelet-based neural network models with all 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that pre-processing the input rainfall data with the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
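The pre-processing step amounts to splitting the input series into approximation and detail subseries that then feed the neural network. A single-level Haar decomposition is sketched below for self-containment; the paper's best-performing db8 wavelet would require a longer filter (e.g., via PyWavelets):

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail subseries.
    Input length must be even."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Perfect-reconstruction inverse of haar_dwt."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend(((a + d) / s, (a - d) / s))
    return out
```

Applying `haar_dwt` recursively to the approximation coefficients yields the multi-level decomposition (level nine in the paper's best models), with each subseries supplied as a separate ANN input.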
Wavelet-based nearest-regularized subspace for noise-robust hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Wei; Liu, Kui; Su, Hongjun
2014-01-01
A wavelet-based nearest-regularized-subspace classifier is proposed for noise-robust hyperspectral image (HSI) classification. The nearest-regularized-subspace classifier, which couples nearest-subspace classification with a distance-weighted Tikhonov regularization, was originally designed to consider only the original spectral bands. Recent research has found that the multiscale wavelet features [e.g., those extracted by the redundant discrete wavelet transform (RDWT)] of each hyperspectral pixel are potentially very useful and less sensitive to noise. An integration of wavelet-based features with the nearest-regularized-subspace classifier is proposed to improve classification performance in noisy environments. Specifically, the rich, noise-robust features provided by the RDWT of the hyperspectral spectrum are employed in a decision-fusion system or as preprocessing for the nearest-regularized-subspace (NRS) classifier. Improved performance of the proposed method over conventional approaches, such as the support vector machine, is shown by testing several HSIs. For example, the NRS classifier achieved an accuracy of 65.38% on the AVIRIS Indian Pines data with 75 training samples per class under noisy conditions (signal-to-noise ratio = 36.87 dB), while the wavelet-based classifier obtained an accuracy of 71.60%, an improvement of approximately 6%.
Effect of Random Clustering on Surface Damage Density Estimates
Matthews, M J; Feit, M D
2007-10-29
Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near damage threshold. Of particular interest in inertial confinement laser systems are large-aperture beam damage tests (>1 cm²), where the number of initiated damage sites for φ > 14 J/cm² can approach 10⁵-10⁶, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above damage threshold. The parameter ψ = aρ = ρ/ρ₀, where a = 1/ρ₀ is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
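The clumping effect can be illustrated with a small Monte Carlo sketch (an illustrative simulation, not the paper's analytic model): damage sites are scattered uniformly at random, sites whose disks overlap are merged into single observed "clumps" with a union-find pass, and the observed count density is compared with the true one. With site radius r = 1 and a = πr², the clumping parameter here is ψ = aρ ≈ 0.13; all values are illustrative.

```python
import numpy as np

def count_clumps(points, radius):
    """Count connected clusters of circular damage sites: two sites
    merge into one observed 'clump' when their disks overlap."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] < (2 * radius) ** 2:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

rng = np.random.default_rng(1)
L, r, n_sites = 100.0, 1.0, 400
pts = rng.uniform(0, L, size=(n_sites, 2))
observed = count_clumps(pts, r)
rho_true = n_sites / L ** 2        # true site density
rho_obs = observed / L ** 2        # density an automated counter would report
print(rho_obs <= rho_true)         # clumping can only reduce the count
```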
Estimation of Volumetric Breast Density from Digital Mammograms
NASA Astrophysics Data System (ADS)
Alonzo-Proulx, Olivier
Mammographic breast density (MBD) is a strong risk factor for developing breast cancer. MBD is typically estimated by manually selecting the area occupied by the dense tissue on a mammogram. There is interest in measuring the volume of dense tissue, or volumetric breast density (VBD), as it could potentially be a stronger risk factor. This dissertation presents and validates an algorithm to measure the VBD from digital mammograms. The algorithm is based on an empirical calibration of the mammography system, supplemented by physical modeling of x-ray imaging that includes the effects of beam polychromaticity, scattered radiation, the anti-scatter grid, and detector glare. It also includes a method to estimate the compressed breast thickness as a function of the compression force, and a method to estimate the thickness of the breast outside of the compressed region. The algorithm was tested on 26 simulated mammograms obtained from computed tomography images, themselves deformed to mimic the effects of compression. This allowed the determination of the baseline accuracy of the algorithm. The algorithm was also used on 55 087 clinical digital mammograms, which allowed for the determination of the general characteristics of VBD and breast volume, as well as their variation as a function of age and time. The algorithm was also validated against a set of 80 magnetic resonance images, and compared against the area method on 2688 images. A preliminary study comparing the association of breast cancer risk with VBD and MBD was also performed, indicating that VBD is a stronger risk factor. The algorithm was found to be accurate, generating quantitative density measurements rapidly and automatically. It can be extended to any digital mammography system, provided that the compression thickness of the breast can be determined accurately.
Accurate photometric redshift probability density estimation - method comparison and application
NASA Astrophysics Data System (ADS)
Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben
2015-10-01
We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≳ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
Atmospheric turbulence mitigation using complex wavelet-based fusion.
Anantrasirichai, Nantheera; Achim, Alin; Kingsbury, Nick G; Bull, David R
2013-06-01
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and in most cases impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. PMID:23475359
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Wavelet-based coherence measures of global seismic noise properties
NASA Astrophysics Data System (ADS)
Lyubushin, A. A.
2015-04-01
The coherent behavior of four parameters characterizing the global field of low-frequency (periods from 2 to 500 min) seismic noise is studied. These parameters include the generalized Hurst exponent, the multifractal singularity spectrum support width, the normalized entropy of variance, and kurtosis. The analysis is based on the data from 229 broadband stations of the GSN, GEOSCOPE, and GEOFON networks for a 17-year period from the beginning of 1997 to the end of 2013. The entire set of stations is subdivided into eight groups, which, taken together, provide full coverage of the Earth. The daily median values of the studied noise parameters are calculated in each group. This procedure yields four 8-dimensional time series with a time step of 1 day and a length of 6209 samples in each scalar component. For each of the four 8-dimensional time series, a multiple correlation measure is estimated, based on computing robust canonical correlations for the Haar wavelet coefficients at the first detail level within a moving time window of length 365 days. These correlation measures for each noise property demonstrate a substantial increase starting from 2007-2008 that continued until the end of 2013. Taking into account the well-known phenomenon of increasing noise correlation before catastrophes, this increase in seismic noise synchronization is interpreted as an indicator of the activation of the strongest (magnitude not less than 8.5) earthquakes observed since the Sumatra mega-earthquake of 26 Dec 2004. The synchronization continues to grow up to the end of the studied period (2013), which can be interpreted as a probable precursor of a further increase in the intensity of the strongest earthquakes worldwide.
Density estimation on multivariate censored data with optional Pólya tree.
Seok, Junhee; Tian, Lu; Wong, Wing H
2014-01-01
Analyzing the failure times of multiple events is of interest in many fields. Estimating the joint distribution of the failure times in a non-parametric way is not straightforward because some failure times are often right-censored and only known to be greater than observed follow-up times. Although it has been studied, there is no universally optimal solution for this problem. It is still challenging and important to provide alternatives that may be more suitable than existing ones in specific settings. Related problems of the existing methods are not only limited to infeasible computations, but also include the lack of optimality and possible non-monotonicity of the estimated survival function. In this paper, we proposed a non-parametric Bayesian approach for directly estimating the density function of multivariate survival times, where the prior is constructed based on the optional Pólya tree. We investigated several theoretical aspects of the procedure and derived an efficient iterative algorithm for implementing the Bayesian procedure. The empirical performance of the method was examined via extensive simulation studies. Finally, we presented a detailed analysis using the proposed method on the relationship among organ recovery times in severely injured patients. From the analysis, we suggested interesting medical information that can be further pursued in clinics. PMID:23902636
NASA Astrophysics Data System (ADS)
Walia, Suresh Kumar; Patel, Raj Kumar; Vinayak, Hemant Kumar; Parti, Raman
2013-12-01
The objective of this study is to bring out errors introduced during construction that are overlooked during physical verification of the bridge. Such errors can be identified if the symmetry of the structure is challenged. This paper therefore presents a study of the downstream and upstream trusses of a newly constructed steel bridge using time-frequency and wavelet-based approaches. The variation in the behavior of the truss joints of the bridge with vehicle speed has been worked out to determine their flexibility. Testing on the steel bridge was carried out with the same instrument setup on both the upstream and downstream trusses at two different speeds with the same moving vehicle. The nodal flexibility investigation is carried out using power spectral density, the short-time Fourier transform, and the wavelet packet transform with respect to both trusses and both speeds. The results show that the joints of the upstream and downstream trusses behave differently, even though they were designed for the same loading, owing to constructional variations and vehicle movement, in spite of the fact that analytical models present a simplistic model for analysis and design. The difficulty of modal parameter extraction for the bridge under study increased with speed because of the decreased excitation time.
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be obtained equivalently using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
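The idea can be illustrated numerically. The sketch below estimates the cross-spectral density matrix of a synthetic two-input/one-output system with SciPy and computes the multiple coherence from the matrix inverse, an algebraically equivalent route to the Cholesky/SVD formulations described above; all signals and parameters are illustrative.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(2)
fs, n = 256.0, 8192
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 1.0 * x1 + 0.5 * x2 + 0.1 * rng.standard_normal(n)  # output = mix + noise

chans = [x1, x2, y]
m = len(chans)
nper = 256
# Build the full cross-spectral density matrix G[f, i, j] with Welch averaging.
f, _ = csd(chans[0], chans[0], fs=fs, nperseg=nper)
G = np.empty((len(f), m, m), dtype=complex)
for i in range(m):
    for j in range(m):
        _, G[:, i, j] = csd(chans[i], chans[j], fs=fs, nperseg=nper)

# Multiple coherence of y on (x1, x2):
#   gamma^2(f) = 1 - 1 / (G_yy(f) * [G(f)^{-1}]_yy)
Ginv = np.linalg.inv(G)
gamma2 = 1.0 - 1.0 / np.real(G[:, -1, -1] * Ginv[:, -1, -1])
print(gamma2.mean() > 0.9)  # y is almost fully explained by the two inputs
```

Because the estimated G is a sum of per-segment outer products, it is positive semidefinite, which keeps the estimated coherence within [0, 1].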
Bayesian MCMC Bandwidth Estimation on Kernel Density Estimation for Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Lee, T.; Ouarda, T. B.; Lee, J.
2009-05-01
Recent advances in computational capacity allow the use of more sophisticated approaches that require high computational power, such as importance sampling and Bayesian Markov chain Monte Carlo (BMCMC) methods. In flood frequency analysis, the use of BMCMC makes it possible to model the uncertainty associated with quantile estimates through the posterior distributions of model parameters. BMCMC models have been used in association with various parametric distributions for the estimation of flood quantiles, but they have never been applied with nonparametric distributions for the same objective. In this paper, BMCMC is used for the selection of the bandwidth of a kernel density estimate (KDE) in order to carry out extreme-value frequency analysis. KDE has not gained much acceptance in the field of frequency analysis because the estimate easily dies off away from the observed points (low predictive ability). The use of gamma kernels solves this problem because of their thicker right tails and variable kernel smoothness: even if the bandwidth is unchanged, the gamma kernel adapts its variance to the estimation point. Furthermore, BMCMC provides the uncertainty induced by the bandwidth selection. The predictive ability of the gamma KDE is investigated with Monte Carlo simulation. Results show the usefulness of the gamma kernel density estimate in flood frequency analysis.
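A gamma-kernel density estimate of the kind referred to above can be sketched in a few lines, following the commonly used form in which the kernel at estimation point x is a gamma density with shape x/b + 1 and scale b. The Bayesian MCMC bandwidth selection itself is not reproduced here; the fixed bandwidth b and the synthetic "flood" sample are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x, data, b):
    """Gamma-kernel density estimate at point(s) x >= 0 with bandwidth b.
    The kernel shape varies with the estimation point, so no probability
    mass leaks below zero and smoothness adapts along the support."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([gamma.pdf(data, a=xi / b + 1.0, scale=b).mean()
                     for xi in x])

# Illustrative annual-maximum sample (synthetic, gamma-distributed).
rng = np.random.default_rng(3)
sample = rng.gamma(shape=2.0, scale=50.0, size=500)

grid = np.linspace(0.0, 600.0, 601)
dens = gamma_kde(grid, sample, b=10.0)
mass = float(dens.sum() * (grid[1] - grid[0]))   # should be close to 1
print(f"total mass ~ {mass:.3f}")
```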
California at Berkeley, University of
Electron density estimations derived from spacecraft potential measurements on Cluster in tenuous plasma regions. The spacecraft photoelectron characteristic (photoelectrons escaping to the plasma) is examined, and the consequences for plasma density measurements are addressed. Typical examples are presented.
Estimating tropical-forest density profiles from multibaseline interferometric SAR
NASA Technical Reports Server (NTRS)
Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio
2006-01-01
Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility, and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is shown compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.
NASA Astrophysics Data System (ADS)
Song, Young-Chul; Choi, Doo-Hyun; Park, Kil-Houm
2006-06-01
This paper proposes a wavelet-based preprocessing method to improve the detection capability of a blob-Mura-defect-detection algorithm. The non-uniformity of the background region is eliminated by replacing the approximation coefficients with a constant value, and the brightness difference between the background region and defect regions is increased by multiplying the detail coefficients by a weighting factor. The proposed method can control the detectable defect level through proper selection of the defect-detection level. Experimental results demonstrate that the proposed method can effectively enhance blob-Mura defects in thin-film-transistor liquid crystal display panels.
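The two operations described — flattening the background by replacing the approximation coefficients with a constant, and amplifying defects by weighting the detail coefficients — can be sketched with a two-level 2-D Haar transform in plain NumPy. The paper's wavelet, decomposition depth, and weighting factor are not specified here; everything below is illustrative.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar analysis: approximation LL plus 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img

# Synthetic panel: slowly varying backlight non-uniformity + a faint blob.
yy, xx = np.mgrid[0:64, 0:64]
panel = 100.0 + 0.2 * xx + 0.1 * yy
panel[30:34, 30:34] += 2.0                 # faint "Mura" blob defect

# Two-level decomposition: flatten the coarse approximation, weight details.
LL1, LH1, HL1, HH1 = haar2d(panel)
LL2, LH2, HL2, HH2 = haar2d(LL1)
LL2[:] = LL2.mean()                        # constant approximation: background gone
wf = 4.0                                   # weighting factor for detail bands
LL1w = ihaar2d(LL2, wf * LH2, wf * HL2, wf * HH2)
enhanced = ihaar2d(LL1w, wf * LH1, wf * HL1, wf * HH1)

contrast = enhanced[30:34, 30:34].mean() - enhanced[:16, :16].mean()
print(contrast > 0)   # the defect now stands out against a flat background
```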
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Serial identification of EEG patterns using adaptive wavelet-based analysis
NASA Astrophysics Data System (ADS)
Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.
2013-10-01
The problem of recognizing specific oscillatory patterns in electroencephalograms (EEGs) with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.
Estimating Foreign-Object-Debris Density from Photogrammetry Data
NASA Technical Reports Server (NTRS)
Long, Jason; Metzger, Philip; Lane, John
2013-01-01
Within the first few seconds after launch of STS-124, debris traveling vertically near the vehicle was captured on two 16-mm film cameras surrounding the launch pad. One particular piece of debris caught the attention of engineers investigating the release of the flame trench fire bricks. The question to be answered was whether the debris was a fire brick, and whether it represented the first bricks ejected from the flame trench wall, or whether the object was one of the pieces of debris normally ejected from the vehicle during launch. If it was typical launch debris, such as SRB throat plug foam, why was it traveling vertically and parallel to the vehicle during launch instead of following its normal trajectory, flying horizontally toward the north perimeter fence? By utilizing the Runge-Kutta integration method for velocity and the Verlet integration method for position, a method was obtained that suppresses trajectory computational instabilities due to noisy position data. This combination of integration methods provides a means to extract the best estimate of drag force and drag coefficient under the non-ideal conditions of limited position data. This integration strategy leads immediately to the best possible estimate of object density, within the constraints of unknown particle shape. These types of calculations do not exist in readily available off-the-shelf simulation software, especially where photogrammetry data are needed as an input.
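The forward problem that such a trajectory analysis inverts — vertical motion of a debris fragment under gravity and quadratic aerodynamic drag — can be sketched with a velocity-Verlet integrator. This is not the article's photogrammetry pipeline; the fragment's mass, drag coefficient, and frontal area below are hypothetical.

```python
import numpy as np

def accel(v, m, rho_air, cd, area, g=9.81):
    """Gravity plus quadratic aerodynamic drag opposing the motion (1-D)."""
    drag = 0.5 * rho_air * cd * area * v * abs(v) / m
    return -g - drag

def velocity_verlet(v0, z0, m, rho_air, cd, area, dt=1e-3, steps=15000):
    """Velocity-Verlet integration of vertical motion with drag.
    Since the force depends on v, a(t+dt) uses a simple predictor step."""
    z, v = z0, v0
    a = accel(v, m, rho_air, cd, area)
    for _ in range(steps):
        z += v * dt + 0.5 * a * dt * dt
        a_new = accel(v + a * dt, m, rho_air, cd, area)  # predicted a(t+dt)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return z, v

# A hypothetical brick-like fragment: 1 kg, Cd = 1.0, frontal area 0.01 m^2.
m, rho_air, cd, area = 1.0, 1.2, 1.0, 0.01
z, v = velocity_verlet(v0=0.0, z0=0.0, m=m, rho_air=rho_air,
                       cd=cd, area=area, dt=1e-3, steps=15000)
v_term = np.sqrt(2 * m * 9.81 / (rho_air * cd * area))
print(abs(v) / v_term)   # approaches 1 as the fragment nears terminal velocity
```

Inverting this model against measured positions (the article's actual task) amounts to adjusting the drag term until the simulated trajectory matches the photogrammetry data.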
Padma, A
2011-01-01
The research work presented in this paper aims to achieve tissue classification and automatic diagnosis of the abnormal tumor region present in Computed Tomography (CT) images using a wavelet-based statistical texture analysis method. Comparative studies are performed for the proposed wavelet-based texture analysis method and the Spatial Gray Level Dependence Method (SGLDM). Our proposed system consists of four phases: (i) discrete wavelet decomposition; (ii) feature extraction; (iii) feature selection; (iv) analysis of the extracted texture features by a classifier. A wavelet-based statistical texture feature set is derived from normal and tumor regions. A Genetic Algorithm (GA) is used to select the optimal texture features from the set of extracted texture features. We construct a Support Vector Machine (SVM) based classifier and evaluate its performance by comparing its classification results with those of a Back Propagation Neural network classifier (BPN...
Prediction and identification using wavelet-based recurrent fuzzy neural networks.
Lin, Cheng-Jian; Chin, Cheng-Chung
2004-10-01
This paper presents a wavelet-based recurrent fuzzy neural network (WRFNN) for prediction and identification of nonlinear dynamic systems. The proposed WRFNN model combines the traditional Takagi-Sugeno-Kang (TSK) fuzzy model and the wavelet neural networks (WNN). This paper adopts the nonorthogonal and compactly supported functions as wavelet neural network bases. Temporal relations embedded in the network are caused by adding some feedback connections representing the memory units into the second layer of the feedforward wavelet-based fuzzy neural networks (WFNN). An online learning algorithm, which consists of structure learning and parameter learning, is also presented. The structure learning depends on the degree measure to obtain the number of fuzzy rules and wavelet functions. Meanwhile, the parameter learning is based on the gradient descent method for adjusting the shape of the membership function and the connection weights of WNN. Finally, computer simulations have demonstrated that the proposed WRFNN model requires fewer adjustable parameters and obtains a smaller rms error than other methods. PMID:15503511
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
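The L1-median (geometric median) at the core of the estimator described above can be computed with Weiszfeld's fixed-point iteration. This is a minimal NumPy sketch of that standard algorithm, not the authors' implementation; the data are synthetic.

```python
import numpy as np

def l1_median(X, iters=200, eps=1e-9):
    """Weiszfeld's algorithm for the L1-median: the point minimizing
    the sum of Euclidean distances to the rows of X."""
    m = X.mean(axis=0)                      # start from the sample mean
    for _ in range(iters):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, eps)              # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

rng = np.random.default_rng(4)
data = rng.standard_normal((200, 2))
data_out = np.vstack([data, [[100.0, 100.0]]])   # one gross outlier

mean_shift = np.linalg.norm(data_out.mean(axis=0) - data.mean(axis=0))
med_shift = np.linalg.norm(l1_median(data_out) - l1_median(data))
print(med_shift < mean_shift)   # the L1-median barely moves
```

This robustness to a single gross outlier is exactly the property that motivates substituting the L1-median for the sample mean in the density estimator.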
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82% to 31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68% to 11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
Online Direct Density-Ratio Estimation Applied to Inlier-Based Outlier Detection.
du Plessis, Marthinus Christoffel; Shiino, Hiroaki; Sugiyama, Masashi
2015-09-01
Many machine learning problems, such as nonstationarity adaptation, outlier detection, dimensionality reduction, and conditional density estimation, can be effectively solved by using the ratio of probability densities. Since the naive two-step procedure of first estimating the probability densities and then taking their ratio performs poorly, methods to directly estimate the density ratio from two sets of samples without density estimation have been extensively studied recently. However, these methods are batch algorithms that use the whole data set to estimate the density ratio, and they are inefficient in the online setup, where training samples are provided sequentially and solutions are updated incrementally without storing previous samples. In this letter, we propose two online density-ratio estimators based on the adaptive regularization of weight vectors. Through experiments on inlier-based outlier detection, we demonstrate the usefulness of the proposed methods. PMID:26161817
A family of non-parametric density estimation algorithms
Tabak, Esteban G.
Courant Institute. A family of non-parametric algorithms for density estimation is proposed. The methodology, which builds on the one developed in [17], normalizes the data through maps that depend on a single parameter; all the complexity of arbitrary, possibly convoluted probability densities...
Nonparametric estimation of population density for line transect sampling using Fourier series
Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.
1979-01-01
A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the Fourier series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
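A minimal sketch of this kind of Fourier (cosine) series density estimator on an interval [0, W] follows, applied to synthetic right-angle sighting distances. The number of terms m, the half-width W, and the data are all illustrative, not taken from the paper.

```python
import numpy as np

def fourier_density(x, data, W, m):
    """Cosine-series estimate of a density on [0, W]:
       f(x) = 1/W + sum_{k=1}^{m} a_k cos(k*pi*x/W),
       a_k  = (2 / (n*W)) * sum_i cos(k*pi*x_i/W)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    n = len(data)
    est = np.full_like(x, 1.0 / W)
    for k in range(1, m + 1):
        a_k = 2.0 / (n * W) * np.cos(k * np.pi * data / W).sum()
        est += a_k * np.cos(k * np.pi * x / W)
    return est

# Synthetic right-angle distances: detection falls off with distance.
rng = np.random.default_rng(5)
W = 50.0                                   # truncation distance (m)
dist = np.abs(rng.normal(0.0, 15.0, size=400))
dist = dist[dist <= W]

grid = np.linspace(0.0, W, 501)
f_hat = fourier_density(grid, dist, W, m=4)
mass = float(f_hat.sum() * (grid[1] - grid[0]))  # should be close to 1
print(f_hat[0] > f_hat[-1])   # more probability mass near the transect line
```

Because each cosine term integrates to zero over [0, W], the estimate integrates to 1 by construction, up to numerical grid error.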
Density matrix estimation in quantum homodyne tomography
Wang, Yazhen; Xu, Chenliang
Scientists need to learn quantum systems from experimental data. As density matrices are usually employed to characterize the quantum states of the systems, this paper investigates the estimation of density matrices.
Estimating low-density snowshoe hare populations using fecal pellet counts
Murray, Dennis L.; Roth, James D.; Ellsworth, Ethan; Wirsing, Aaron J.; Steury, Todd D.
Snowshoe hare (Lepus americanus) populations are assessed via fecal pellet plots; the utility of pellet plots for estimating hare populations is evaluated by correlating pellet densities with estimated hare densities.
On locally adaptive density estimation
Sain, Stephan R.; Scott, David W.
1996-01-08
Scott is Professor, Department of Statistics, Rice University, POB 1892, Houston, TX 77251. For background on kernel estimators as well as other density estimators, see Silverman (1986), Scott (1992), and Wand...
How Bandwidth Selection Algorithms Impact Exploratory Data Analysis Using Kernel Density Estimation
Harpole, Jared Kenneth
2013-05-31
Exploratory data analysis (EDA) is important, yet often overlooked in the social and behavioral sciences. Graphical analysis of one's data is central to EDA. A viable method of estimating and graphing the underlying density in EDA is kernel density...
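As a concrete example of the bandwidth choices at issue, Silverman's rule of thumb and a plain fixed-bandwidth Gaussian KDE can be written in a few lines. This sketches one popular selector, not the dissertation's full comparison; the bimodal sample is synthetic.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    n = len(x)
    sd = np.std(x, ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

def gaussian_kde(grid, x, h):
    """Fixed-bandwidth Gaussian kernel density estimate on a grid."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
h = silverman_bandwidth(x)

grid = np.linspace(-5, 5, 1001)
dens = gaussian_kde(grid, x, h)
mass = float(dens.sum() * (grid[1] - grid[0]))
# A bimodal sample: the estimated density should dip between the two modes.
mid = dens[np.abs(grid) < 0.5].max()
peak = dens.max()
print(mid < peak)
```

Rule-of-thumb selectors like this tend to oversmooth multimodal data, which is exactly why bandwidth selection matters for graphical EDA.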
Brown, S.
1996-07-01
This chapter discusses estimating the biomass density of forest vegetation. Data from inventories of tropical Asia and America were used to estimate biomass densities. Efforts to quantify forest disturbance suggest that population density, at subnational scales, can be used as a surrogate index to encompass all the anthropogenic activities (logging, slash-and-burn agriculture, grazing) that lead to degradation of tropical forest biomass density.
Chang, Pao-Chi
2007-01-01
Computerized Medical Imaging and Graphics 31 (2007) 18. Keywords: Wavelet-based medical image compression; Medical image; Selection of predictor variables; Adaptive arithmetic coding; Multicollinearity problem. 1. Introduction. Medical images are a special category of images in their characteristics and purposes.
Rajaraman, R; Hariharan, G
2014-07-01
In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher-type and fractional Fisher-type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has yet been reported for the Fisher and fractional Fisher equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher equation and the fractional Fisher equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed method is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24908255
A new algorithm for wavelet-based heart rate variability analysis
García, Constantino A; Vila, Xosé; Márquez, David G
2014-01-01
One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short-Time Fourier Transform (STFT) is often used. However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher, which has left this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution time.
An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report
Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.
1998-11-01
The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.
Wavelet bases on the interval with short support and vanishing moments
NASA Astrophysics Data System (ADS)
Bímová, Daniela; Černá, Dana; Finěk, Václav
2012-11-01
Jia and Zhao have recently proposed a construction of a cubic spline wavelet basis on the interval which satisfies homogeneous Dirichlet boundary conditions of the second order. They used the basis for solving fourth-order problems and showed that the Galerkin method with this basis has superb convergence. The stiffness matrices for the biharmonic equation defined on a unit square have very small and uniformly bounded condition numbers. In our contribution, we design wavelet bases with the same scaling functions but different wavelets. We show that our basis has the same quantitative properties as the wavelet basis constructed by Jia and Zhao, and additionally the wavelets have vanishing moments. This enables the use of the basis in adaptive wavelet methods and non-adaptive sparse grid methods. Furthermore, we even improve the condition numbers of the stiffness matrices by including lower levels.
Corrosion in Reinforced Concrete Panels: Wireless Monitoring and Wavelet-Based Analysis
Qiao, Guofu; Sun, Guodong; Hong, Yi; Liu, Tiejun; Guan, Xinchun
2014-01-01
To realize the efficient data capture and accurate analysis of pitting corrosion of the reinforced concrete (RC) structures, we first design and implement a wireless sensor and network (WSN) to monitor the pitting corrosion of RC panels, and then, we propose a wavelet-based algorithm to analyze the corrosion state with the corrosion data collected by the wireless platform. We design a novel pitting corrosion-detecting mote and a communication protocol such that the monitoring platform can sample the electrochemical emission signals of corrosion process with a configured period, and send these signals to a central computer for the analysis. The proposed algorithm, based on the wavelet domain analysis, returns the energy distribution of the electrochemical emission data, from which close observation and understanding can be further achieved. We also conducted test-bed experiments based on RC panels. The results verify the feasibility and efficiency of the proposed WSN system and algorithms. PMID:24556673
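The wavelet-domain energy-distribution idea can be sketched with a plain Haar pyramid. This is an illustrative numpy-only stand-in, not the authors' implementation; the mote hardware, protocol, and electrochemical data are not modelled.

```python
import numpy as np

def haar_energy_distribution(signal, levels=4):
    """Multi-level Haar decomposition of a 1-D signal; return the fraction
    of total energy carried by each level's detail coefficients, plus the
    final approximation. The orthonormal transform conserves energy, so
    the fractions sum to one."""
    s = np.asarray(signal, float)
    energies = []
    for _ in range(levels):
        s = s[: len(s) // 2 * 2]                 # even length for pairing
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail**2))
        s = approx
    energies.append(np.sum(s**2))                # residual approximation energy
    energies = np.array(energies)
    return energies / energies.sum()

rng = np.random.default_rng(7)
t = np.arange(1024)
# Slow drift plus broadband noise, a crude stand-in for an
# electrochemical emission record.
sig = np.sin(2 * np.pi * t / 512) + 0.3 * rng.standard_normal(1024)
dist = haar_energy_distribution(sig, levels=5)
```

Inspecting which levels carry the energy is the "close observation" step: a corroding specimen would shift energy between bands relative to a passive one.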
Wavelet-based built-in damage detection and identification for composites
NASA Astrophysics Data System (ADS)
Yan, G.; Zhou, Lily L.; Yuan, F. G.
2005-05-01
In this paper, a wavelet-based built-in damage detection and identification algorithm for carbon fiber reinforced polymer (CFRP) laminates is proposed. Lamb waves propagating in laminates are first modeled analytically using higher-order plate theory and compared them with experimental results in terms of group velocity. Distributed piezoelectric transducers are used to generate and monitor the fundamental ultrasonic Lamb waves in the laminates with narrowband frequencies. A signal processing scheme based on wavelet analysis is applied on the sensor signals to extract the group velocity of the wave propagating in the laminates. Combined with the theoretically computed wave velocity, a genetic algorithms (GA) optimization technique is employed to identify the location and size of the damage. The applicability of this proposed method to detect and size the damage is demonstrated by experimental studies on a composite plate with simulated delamination damages.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
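A minimal numerical contrast between the two distribution models discussed above, assuming a method-of-moments Beta fit (the study's exact fitting procedure is not reproduced) and a fixed-bandwidth Gaussian KDE; the synthetic recovery rates are illustrative.

```python
import numpy as np

def beta_mom(x):
    """Method-of-moments Beta(a, b) fit: a simple stand-in for a
    maximum-likelihood Beta fit."""
    m, v = x.mean(), x.var()
    c = m * (1 - m) / v - 1
    return m * c, (1 - m) * c

def gauss_kde(x, grid, h):
    """Fixed-bandwidth Gaussian kernel density estimate on `grid`."""
    z = (grid[None, :] - x[:, None]) / h
    return np.exp(-0.5 * z**2).sum(0) / (x.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
# Bimodal recovery rates: many near-total losses, many near-full recoveries.
rates = np.concatenate([rng.beta(2, 8, 300), rng.beta(8, 2, 300)])
a, b = beta_mom(rates)
grid = np.linspace(0.01, 0.99, 99)
kde = gauss_kde(rates, grid, h=0.05)
# On this data the moment fit gives a, b < 1: a U-shaped Beta with modes
# pinned at 0 and 1, missing the true modes near 0.2 and 0.8, which the
# KDE recovers. This is the flaw the abstract describes.
```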
Masci, Frank
DENSITY ESTIMATION FOR STATISTICS AND DATA ANALYSIS. B.W. Silverman, School of Mathematics, University of Bath, UK. Table of Contents: Introduction: What is density estimation?; density estimates; domains and directional data; discussion and bibliography. 1. Introduction. 1.1. What is density estimation?
Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.
Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M
2015-05-01
Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology. PMID:26236833
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator with a Taylor series and show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed estimator approximates the original MMSE nonlinearity reasonably well.
Effects of LiDAR point density and landscape context on estimates of urban forest biomass
NASA Astrophysics Data System (ADS)
Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.
2015-03-01
Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. 
Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
Techniques and Technology Article Road-Based Surveys for Estimating Wild Turkey Density
Wallace, Mark C.
Line-transect-based distance sampling has been used to estimate the density of several wild bird species, including wild turkeys (Meleagris gallopavo). We used inflatable turkey decoys during autumn (Aug-Nov) and winter (Dec-Mar) 2003
Estimating snowshoe hare population density from pellet plots: a further evaluation
Krebs, Charles J.
We counted fecal pellets of snowshoe hares (Lepus americanus) once a year in 10 areas in the southwestern Yukon, and we correlated these counts with estimates of absolute hare density obtained by intensive mark
Bayesian Nonparametric Functional Data Analysis Through Density Estimation.
Rodríguez, Abel; Dunson, David B; Gelfand, Alan E
2009-01-01
In many modern experimental settings, observations are obtained in the form of functions, and interest focuses on inferences on a collection of such functions. We propose a hierarchical model that allows us to simultaneously estimate multiple curves nonparametrically by using dependent Dirichlet Process mixtures of Gaussians to characterize the joint distribution of predictors and outcomes. Function estimates are then induced through the conditional distribution of the outcome given the predictors. The resulting approach allows for flexible estimation and clustering, while borrowing information across curves. We also show that the function estimates we obtain are consistent on the space of integrable functions. As an illustration, we consider an application to the analysis of Conductivity and Temperature at Depth data in the north Atlantic. PMID:19262739
Estimation of current density distribution under electrodes for external defibrillation
Krasteva, Vessela Tz; Papazov, Sava P
2002-01-01
Background Transthoracic defibrillation is the most common life-saving technique for the restoration of the heart rhythm of cardiac arrest victims. The procedure requires adequate application of large electrodes on the patient chest, to ensure low-resistance electrical contact. The current density distribution under the electrodes is non-uniform, leading to muscle contraction and pain, or risks of burning. The recent introduction of automatic external defibrillators and even wearable defibrillators, presents new demanding requirements for the structure of electrodes. Method and Results Using the pseudo-elliptic differential equation of Laplace type with appropriate boundary conditions and applying finite element method modeling, electrodes of various shapes and structure were studied. The non-uniformity of the current density distribution was shown to be moderately improved by adding a low resistivity layer between the metal and tissue and by a ring around the electrode perimeter. The inclusion of openings in long-term wearable electrodes additionally disturbs the current density profile. However, a number of small-size perforations may result in acceptable current density distribution. Conclusion The current density distribution non-uniformity of circular electrodes is about 30% less than that of square-shaped electrodes. The use of an interface layer of intermediate resistivity, comparable to that of the underlying tissues, and a high-resistivity perimeter ring, can further improve the distribution. The inclusion of skin aeration openings disturbs the current paths, but an appropriate selection of number and size provides a reasonable compromise. PMID:12537593
Samb, Rawane
2012-01-01
This manuscript is a supplemental document providing the omitted material for our paper entitled "Nonparametric kernel estimation of the probability density function of regression errors using estimated residuals" [arXiv:1010.0439]. The paper is submitted to Journal of Nonparametric Statistics.
Horowitz, Roberto
Traffic Density Estimation with the Cell Transmission Model. Laura Muñoz, Xiaotian Sun, Roberto Horowitz. Estimation of traffic densities at unmonitored locations along a highway. The SMM is a hybrid system that switches among modes; the observability and controllability properties of the SMM modes have been determined. Both the SMM and a density
Thomas, Len
Estimating cetacean population density using fixed passive acoustic sensors: an example. Methods are described for estimating the density of cetacean populations using data from a set of fixed passive acoustic sensors. The methods convert the number of detected acoustic cues into animal density by accounting for (i) the probability of detecting
Links between PPCA and subspace methods for complete Gaussian density estimation.
Wang, Chong; Wang, Wenyuan
2006-05-01
High-dimensional density estimation is a fundamental problem in pattern recognition and machine learning areas. In this letter, we show that, for complete high-dimensional Gaussian density estimation, two widely used methods, probabilistic principal component analysis and a typical subspace method using eigenspace decomposition, actually give the same results. Additionally, we present a unified view from the aspect of robust estimation of the covariance matrix. PMID:16722180
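The letter's claim, that the eigenspace and PPCA routes coincide for complete Gaussian density estimation, can be checked numerically. The sketch below makes the simplifying assumption that all d components are retained, so the PPCA noise term vanishes; it is an illustration, not the letter's derivation.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 4, 1000
# Correlated Gaussian data with a random covariance structure.
X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))
X -= X.mean(0)
S = X.T @ X / n                      # sample covariance

# "Subspace" route: full eigenspace decomposition of S.
lam, V = np.linalg.eigh(S)
C_eig = V @ np.diag(lam) @ V.T       # complete (untruncated) reconstruction

# PPCA route with all d components retained: W = V sqrt(lam) and the
# residual noise variance is zero, so W W^T reproduces S as well.
W = V @ np.diag(np.sqrt(lam))
C_ppca = W @ W.T
```

Both routes hand back the same covariance, hence the same Gaussian density; the interesting differences only appear once components are discarded.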
Gossip-based density estimation in dynamic heterogeneous sensor networks
Langendoen, Koen
Density estimation is a service that can be applied in clustering schemes, node redeployment, and sleep-mode scheduling. Heterogeneous deployments assign different densities of sensor types over areas of interest [4]. Another advantage of using heterogeneous nodes is to provide stable sensing capability by rescheduling sleep mode [7] at run time. It can also provide useful
The Root-Unroot Algorithm for Density Estimation as Implemented via Wavelet Block Thresholding
Brown, Lawrence D.
The Root-Unroot Algorithm for Density Estimation as Implemented via Wavelet Block Thresholding. The density estimation problem is converted into a regression problem by binning the data and then applying a suitable form of root transformation to the binned data counts. In principle many common regression procedures could then be applied; a block thresholding estimator is used in this paper. Finally, the estimated regression function is un-rooted
Estimating insect flight densities from attractive trap catches and flight height distributions.
Byers, John A
2012-05-01
Methods and equations have not been developed previously to estimate insect flight densities, a key factor in decisions regarding trap and lure deployment in programs of monitoring, mass trapping, and mating disruption with semiochemicals. An equation to estimate densities of flying insects per hectare is presented that uses the standard deviation (SD) of the vertical flight distribution, trapping time, the trap's spherical effective radius (ER), catch at the mean flight height (as estimated from a best-fitting normal distribution with SD), and an estimated average flight speed. Data from previous reports were used to estimate flight densities with the equations. The same equations can use traps with pheromone lures or attractive colors with a measured effective attraction radius (EAR) instead of the ER. In practice, EAR is more useful than ER for flight density calculations since attractive traps catch higher numbers of insects and thus can measure lower populations more readily. Computer simulations in three dimensions with varying numbers of insects (density) and varying EAR were used to validate the equations for density estimates of insects in the field. Few studies have provided data to obtain EAR, SD, speed, and trapping time to estimate flight densities per hectare. However, the necessary parameters can be measured more precisely in future studies. PMID:22527056
The importance of spatial models for estimating the strength of density dependence.
Thorson, James T; Skaug, Hans J; Kristensen, Kasper; Shelton, Andrew O; Ward, Eric J; Harms, John H; Benante, James A
2015-05-01
Identifying the existence and magnitude of density dependence is one of the oldest concerns in ecology. Ecologists have aimed to estimate density dependence in population and community data by fitting a simple autoregressive (Gompertz) model for density dependence to time series of abundance for an entire population. However, it is increasingly recognized that spatial heterogeneity in population densities has implications for population and community dynamics. We therefore adapt the Gompertz model to approximate local densities over continuous space instead of population-wide abundance, and allow productivity to vary spatially using Gaussian random fields. We then show that the conventional (nonspatial) Gompertz model can result in biased estimates of density dependence (e.g., identifying oscillatory dynamics when not present) if densities vary spatially. By contrast, the spatial Gompertz model provides accurate and precise estimates of density dependence for a variety of simulation scenarios and data availabilities. These results are corroborated when comparing spatial and nonspatial models for data from 10 years and ~100 sampling stations for three long-lived rockfishes (Sebastes spp.) off the California, USA coast. In this case, the nonspatial model estimates implausible oscillatory dynamics on an annual time scale, while the spatial model estimates strong autocorrelation and is supported by model selection tools. We conclude by discussing the importance of improved data archiving techniques, so that spatial models can be used to reexamine classic questions regarding the existence and magnitude of density dependence in wild populations. PMID:26236835
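For reference, the conventional (nonspatial) Gompertz fit discussed in this abstract reduces to an ordinary least-squares autoregression on log abundance. The sketch below simulates a density-dependent series and recovers the coefficient; all names and simulation settings are illustrative, not the study's.

```python
import numpy as np

def fit_gompertz(log_n):
    """OLS fit of the nonspatial Gompertz model
    log n[t+1] = a + b * log n[t] + eps.
    b < 1 indicates density dependence (b = 1 is a random walk)."""
    X = np.column_stack([np.ones(len(log_n) - 1), log_n[:-1]])
    coef, *_ = np.linalg.lstsq(X, log_n[1:], rcond=None)
    return coef                          # (a_hat, b_hat)

rng = np.random.default_rng(11)
a_true, b_true, T = 1.0, 0.6, 400
log_n = np.empty(T)
log_n[0] = a_true / (1 - b_true)         # start at the stationary mean
for t in range(T - 1):
    log_n[t + 1] = a_true + b_true * log_n[t] + 0.2 * rng.standard_normal()
a_hat, b_hat = fit_gompertz(log_n)
```

With spatially varying densities aggregated into one series, this same fit is what the authors show can be biased; the spatial extension replaces the scalar state with a Gaussian random field.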
Estimated global nitrogen deposition using NO2 column density
Lu, Xuehe; Jiang, Hong; Zhang, Xiuying; Liu, Jinxun; Zhang, Zhen; Jin, Jiaxin; Wang, Ying; Xu, Jianhui; Cheng, Miaomiao
2013-01-01
Global nitrogen deposition has increased over the past 100 years. Monitoring and simulation studies of nitrogen deposition have evaluated nitrogen deposition at both the global and regional scale. With the development of remote-sensing instruments, tropospheric NO2 column density retrieved from Global Ozone Monitoring Experiment (GOME) and Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) sensors now provides us with a new opportunity to understand changes in reactive nitrogen in the atmosphere. The concentration of NO2 in the atmosphere has a significant effect on atmospheric nitrogen deposition. Following the general nitrogen deposition calculation method, we use the principal component regression method to evaluate global nitrogen deposition based on global NO2 column density and meteorological data. Regarding the accuracy of the simulation, about 70% of the land area of the Earth passed a significance test of regression. In addition, NO2 column density has a significant influence on regression results over 44% of global land. The simulated results show that global average nitrogen deposition was 0.34 g m⁻² yr⁻¹ from 1996 to 2009 and is increasing at about 1% per year. Our simulated results show that China, Europe, and the USA are the three hotspots of nitrogen deposition, consistent with previous research findings. In this study, Southern Asia was found to be another hotspot of nitrogen deposition (about 1.58 g m⁻² yr⁻¹ and maintaining a high growth rate). As nitrogen deposition increases, the number of regions threatened by high nitrogen deposits is also increasing. With N emissions continuing to increase in the future, areas whose ecosystems are affected by high-level nitrogen deposition will increase.
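The principal component regression step named in this abstract can be sketched as follows. The predictors and response are synthetic stand-ins for the NO2-column and meteorological data, and every name here is an illustrative assumption, not the study's code; PCR is used precisely because such predictors are collinear.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: standardize predictors, project
    onto the top-k principal components, regress y on the scores, and
    map the coefficients back to the original variables."""
    mu, sd = X.mean(0), X.std(0)
    Z = (X - mu) / sd
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:k].T
    gamma, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(y)), scores]), y, rcond=None)
    beta = (Vt[:k].T @ gamma[1:]) / sd       # back to the original scale
    intercept = gamma[0] - mu @ beta
    return intercept, beta

rng = np.random.default_rng(2)
n = 300
# Four strongly collinear predictors driven by one latent factor.
base = rng.standard_normal(n)
X = np.column_stack([base + 0.1 * rng.standard_normal(n) for _ in range(4)])
y = 2.0 * base + 0.1 * rng.standard_normal(n)
b0, beta = pcr_fit(X, y, k=1)
pred = b0 + X @ beta
```

A single component already captures the shared signal, which is why PCR remains stable where ordinary least squares on all four columns would be ill-conditioned.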
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824
Performance evaluation of wavelet-based face verification on a PDA recorded database
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2006-05-01
The rise of international terrorism and the rapid increase in fraud and identity theft have added urgency to the task of developing biometric-based person identification as a reliable alternative to conventional authentication methods. Human identification based on face images is a tough challenge in comparison to identification based on fingerprints or iris recognition. Yet, due to its unobtrusive nature, face recognition is the preferred method of identification for security-related applications. The success of such systems will depend on the support of massive infrastructures. Current mobile communication devices (3G smart phones) and PDAs are equipped with a camera which can capture both still and streaming video clips, and a touch-sensitive display panel. Besides convenience, such devices provide an adequate secure infrastructure for sensitive and financial transactions by protecting against fraud and repudiation while ensuring accountability. Biometric authentication systems for mobile devices would have obvious advantages in conflict scenarios when communication from beyond enemy lines is essential to save soldier and civilian lives. In areas of conflict or disaster, the luxury of fixed infrastructure is not available or is destroyed. In this paper, we present a wavelet-based face verification scheme that has been specifically designed and implemented on a currently available PDA. We report on its performance on the benchmark audio-visual BANCA database and on a newly developed PDA-recorded audio-visual database that includes indoor and outdoor recordings.
Seshadrinath, Jeevanand; Singh, Bhim; Panigrahi, Bijaya Ketan
2014-05-01
Interturn fault diagnosis of induction machines has been discussed using various neural network-based techniques. The main challenge in such methods is the computational complexity due to the huge size of the network, and in pruning a large number of parameters. In this paper, a nearly shift insensitive complex wavelet-based probabilistic neural network (PNN) model, which has only a single parameter to be optimized, is proposed for interturn fault detection. The algorithm constitutes two parts and runs in an iterative way. In the first part, the PNN structure determination has been discussed, which finds out the optimum size of the network using an orthogonal least squares regression algorithm, thereby reducing its size. In the second part, a Bayesian classifier fusion has been recommended as an effective solution for deciding the machine condition. The testing accuracy, sensitivity, and specificity values are highest for the product rule-based fusion scheme, which is obtained under load, supply, and frequency variations. The point of overfitting of PNN is determined, which reduces the size, without compromising the performance. Moreover, a comparative evaluation with traditional discrete wavelet transform-based method is demonstrated for performance evaluation and to appreciate the obtained results. PMID:24808044
Wavelet-based detection of abrupt changes in natural frequencies of time-variant systems
NASA Astrophysics Data System (ADS)
Dziedziech, K.; Staszewski, W. J.; Basu, B.; Uhl, T.
2015-12-01
Detection of abrupt changes in natural frequencies from vibration responses of time-variant systems is a challenging task due to the complex nature of physics involved. It is clear that the problem needs to be analysed in the combined time-frequency domain. The paper proposes an application of the input-output wavelet-based Frequency Response Function for this analysis. The major focus and challenge relate to ridge extraction of the above time-frequency characteristics. It is well known that classical ridge extraction procedures lead to ridges that are smooth. However, this property is not desired when abrupt changes in the dynamics are considered. The methods presented in the paper are illustrated using simulated and experimental multi-degree-of-freedom systems. The results are compared with the classical Frequency Response Function and with the output only analysis based on the wavelet auto-power response spectrum. The results show that the proposed method captures correctly the dynamics of the analysed time-variant systems.
Ibaida, Ayman; Khalil, Ibrahim
2013-12-01
With the growing aging population, a significant portion of which suffers from cardiac diseases, it is conceivable that remote ECG patient monitoring systems will be widely used as point-of-care (PoC) applications in hospitals around the world. Therefore, huge amounts of ECG signals collected by body sensor networks from remote patients at home will be transmitted along with other physiological readings such as blood pressure, temperature, and glucose level, and diagnosed by those remote patient monitoring systems. It is critically important that patient confidentiality is protected while data are transmitted over the public network as well as when they are stored in hospital servers used by remote monitoring systems. In this paper, a wavelet-based steganography technique is introduced which combines encryption and a scrambling technique to protect patient confidential data. The proposed method allows the ECG signal to hide its corresponding patient confidential data and other physiological information, thus guaranteeing the integration between the ECG and the rest. To evaluate the effectiveness of the proposed technique on the ECG signal, two distortion measurement metrics have been used: the percentage residual difference and the wavelet weighted PRD. It is found that the proposed technique provides high-security protection for patient data with low (less than 1%) distortion, and the ECG data remain diagnosable after watermarking (i.e., hiding patient confidential data) as well as after the watermarks (i.e., hidden data) are removed from the watermarked data. PMID:23708767
Radiation dose reduction in digital radiography using wavelet-based image processing methods
NASA Astrophysics Data System (ADS)
Watanabe, Haruyuki; Tsai, Du-Yih; Lee, Yongbum; Matsuyama, Eri; Kojima, Katsuyuki
2011-03-01
In this paper, we investigate the effect of wavelet-transform-based image processing on radiation dose reduction in computed radiography (CR) by measuring various physical characteristics of the wavelet-transformed images. Moreover, we propose a wavelet-based method that offers the possibility of reducing radiation dose while maintaining a clinically acceptable image quality. The proposed method integrates the advantages of a previously proposed technique, a sigmoid-type transfer curve for wavelet coefficient weighting adjustment, with a wavelet soft-thresholding technique. The former can improve the contrast and spatial resolution of CR images, while the latter improves their noise characteristics. In the investigation of physical characteristics, the modulation transfer function, noise power spectrum, and contrast-to-noise ratio of CR images processed by the proposed method and other methods were measured and compared. Furthermore, visual evaluation was performed using Scheffe's pair-comparison method. Experimental results showed that the proposed method could improve overall image quality compared to the other methods. Our visual evaluation showed that an approximately 40% reduction in exposure dose might be achieved in hip joint radiography by using the proposed method.
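The two wavelet-coefficient operations combined above, soft-thresholding for noise suppression and a sigmoid-type weighting curve for contrast enhancement, can be sketched as follows. The gain and midpoint of the sigmoid are illustrative placeholders, not the parameters used in the paper:

```python
import math

def soft_threshold(c, t):
    """Soft-thresholding: shrink a wavelet coefficient toward zero by t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def sigmoid_weight(c, gain=4.0, midpoint=0.5):
    """Sigmoid-type transfer curve: amplifies large detail coefficients
    (contrast/resolution) and attenuates small ones. gain and midpoint
    are assumed values, not the paper's."""
    return c * 2.0 / (1.0 + math.exp(-gain * (abs(c) - midpoint)))

def enhance(details, t=0.1):
    """Denoise first, then apply the weighting, over one subband."""
    return [sigmoid_weight(soft_threshold(c, t)) for c in details]
```

Applied subband by subband before the inverse transform, the thresholding removes low-amplitude noise coefficients while the sigmoid boosts the remaining structural detail.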
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Technical Reports Server (NTRS)
Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)
2002-01-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. 
For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.
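Latin Hypercube Sampling itself is easy to illustrate outside of NESSUS: each dimension is divided into n equal-probability strata, exactly one sample is drawn per stratum, and the strata are paired across dimensions by independent random permutations. A minimal sketch with a toy response (nothing here is NESSUS code; the response and seed are arbitrary):

```python
import random
import statistics

def latin_hypercube(n, dims, rng):
    """n points in [0, 1)^dims with exactly one point per stratum per dimension."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))        # one equal-probability bin per sample
        rng.shuffle(perm)            # pair bins across dimensions at random
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

rng = random.Random(42)
n = 200
# Toy response y = u1 + u2 over independent uniforms; true mean is 1.0
lhs_est = statistics.mean(u1 + u2 for u1, u2 in latin_hypercube(n, 2, rng))
mc_est = statistics.mean(rng.random() + rng.random() for _ in range(n))
```

The stratification forces every sample to land in a distinct bin in each dimension, which is why LHS estimates of the mean scatter far less than plain Monte Carlo at the same number of response evaluations.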
RADIATION PRESSURE DETECTION AND DENSITY ESTIMATE FOR 2011 MD
Micheli, Marco; Tholen, David J.; Elliott, Garrett T. E-mail: tholen@ifa.hawaii.edu
2014-06-10
We present our astrometric observations of the small near-Earth object 2011 MD (H ≈ 28.0), obtained after its very close fly-by to Earth in 2011 June. Our set of observations extends the observational arc to 73 days, and, together with the published astrometry obtained around the Earth fly-by, allows a direct detection of the effect of radiation pressure on the object, with a confidence of 5σ. The detection can be used to put constraints on the density of the object, pointing to either an unexpectedly low value of ρ = (640 ± 330) kg m⁻³ (68% confidence interval) if we assume a typical probability distribution for the unknown albedo, or to an unusually high reflectivity of its surface. This result may have important implications both in terms of impact hazard from small objects and in light of a possible retrieval of this target.
Down the Rabbit Hole: Robust Proximity Search and Density Estimation in Sublinear Space
Har-Peled, Sariel
Down the Rabbit Hole: Robust Proximity Search and Density Estimation in Sublinear Space Sariel Har compression [GG91], computational statistics [DW82], clustering [DHS01], data mining, learning, and many
MAP Estimation of Continuous Density HMM : Theory and Applications Jean-Luc Gauvainy
maximum a posteriori estimation of continuous density hidden Markov models (CDHMM). The classical Markov model, the lack of a sufficient statistic of fixed dimension is due to the underlying hidden pro
A comparison of 2 techniques for estimating deer density
Storm, G.L.; Cottam, D.F.; Yahner, R.H.; Nichols, J.D.
1977-01-01
We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer for 65-75 minutes at dusk during each of 3 counts in April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 to 1,400 between 1987 and 1991, whereas the estimates for November indicated an increase from 983 in 1987 to 1,592 in 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas; the assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
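The mark-resight herd-size estimates above follow from the ratio of marked deer sighted to marked deer known to be present. The abstract does not state the exact estimator used, so the sketch below uses the standard bias-corrected Chapman form with illustrative counts:

```python
def chapman_estimate(n_marked, n_counted, n_marked_seen):
    """Chapman's bias-corrected mark-resight estimator of population size."""
    return (n_marked + 1) * (n_counted + 1) / (n_marked_seen + 1) - 1

# e.g. 100 collared deer in the herd, 500 deer counted at dusk, 54 of them marked
herd = chapman_estimate(100, 500, 54)
```

The estimator scales the count by the inverse of the observed marked fraction, so sighting a smaller share of the marked animals yields a larger herd estimate.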
The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition
Hays, Graeme
The energy density of jellyfish: Estimates from bomb-calorimetry and proximate-composition Thomas K scyphozoan jellyfish (Cyanea capillata, Rhizostoma octopus and Chrysaora hysoscella). First, bomb). These proximate data were subsequently converted to energy densities. The two techniques (bomb- calorimetry
In-Shell Bulk Density as an Estimator of Farmers Stock Grade Factors
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research was to determine whether or not bulk density can be used to accurately estimate farmer stock grade factors such as total sound mature kernels and other kernels. Physical properties including bulk density, pod size and kernel size distributions are measured as part of t...
A PATIENT-SPECIFIC CORONARY DENSITY ESTIMATE R. Shahzad 1,2
van Vliet, Lucas J.
-enhanced native CT scan and a high resolution contrast-enhanced CTA scan. The native scan is used for calcium A reliable density estimate for the position of the coronary arteries in Computed Tomography (CT) data in CT and CT angiography (CTA). The proposed method constructs a patient- specific coronary density
L1-consistent estimation of the density of residuals in random design regression
Devroye, Luc
: Density estimation, L1 error, residuals, nonparametric regression, universal consistency. 1. Introduction the density of the error distribution in nonparametric regression models has been dealt with by several regression errors. In the heteroscedastic nonparametric regression model, where the Yi's have different
Uniform convergence of convolution estimators for the response density in nonparametric regression
Wefelmeyer, Wolfgang
von Mises statistic, local U-statistic, local polynomial smoother, monotone regression function densities, or more generally by a local von Mises statistic. If the regression function has a nowhere ... Uniform convergence of convolution estimators for the response density in nonparametric regression
Estimating beaked whale density from single hydrophones by means of propagation modeling
Thomas, Len
Estimating beaked whale density from single hydrophones by means of propagation modeling. Elizabeth ... Warfare Center) Outline: Overview of DECAF project; Blainville's beaked whales; Study area and available acoustic data; How do we estimate
Technology Transfer Automated Retrieval System (TEKTRAN)
Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...
Multiscale Density Estimation R. M. Willett, Student Member, IEEE, and R. D. Nowak, Member, IEEE
Nowak, Robert
1 Multiscale Density Estimation R. M. Willett, Student Member, IEEE, and R. D. Nowak, Member, IEEE additional advantages: estimates are guaranteed to be positive, Corresponding author: R. Willett, TX 77251-1892 USA (e-mail: willett@ece.rice.edu, phone: 713 348 3230, fax: 713 348 6196). R. Willett
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
Wavelet-based compression of medical images: filter-bank selection and evaluation.
Saffor, A; bin Ramli, A R; Ng, K H
2003-06-01
Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in providing a low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The discrete wavelet transform (DWT), which provides an efficient multiresolution frequency framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as the mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best among all. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels, indicating that with these filters a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded, to achieve a higher compression ratio. PMID:12956184
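The distortion indices used in the study are straightforward to compute. A minimal sketch (here MAE denotes the maximum absolute error, matching the abstract's usage, and 8-bit pixel values spanning 0-255 are assumed):

```python
import math

def mse(orig, recon):
    """Mean square error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def mae(orig, recon):
    """Maximum absolute error (the abstract's MAE)."""
    return max(abs(a - b) for a, b in zip(orig, recon))

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit pixel data."""
    err = mse(orig, recon)
    return float("inf") if err == 0.0 else 10.0 * math.log10(peak ** 2 / err)
```

Higher PSNR means less compression damage, which is why the brain images, with their higher PSNR, survived compression better than the abdomen images above.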
Wavelet-based clustering of resting state MRI data in the rat.
Medda, Alessio; Hoffmann, Lukas; Magnuson, Matthew; Thompson, Garth; Pan, Wen-Ju; Keilholz, Shella
2016-01-01
While functional connectivity has typically been calculated over the entire length of the scan (5-10min), interest has been growing in dynamic analysis methods that can detect changes in connectivity on the order of cognitive processes (seconds). Previous work with sliding window correlation has shown that changes in functional connectivity can be observed on these time scales in the awake human and in anesthetized animals. This exciting advance creates a need for improved approaches to characterize dynamic functional networks in the brain. Previous studies were performed using sliding window analysis on regions of interest defined based on anatomy or obtained from traditional steady-state analysis methods. The parcellation of the brain may therefore be suboptimal, and the characteristics of the time-varying connectivity between regions are dependent upon the length of the sliding window chosen. This manuscript describes an algorithm based on wavelet decomposition that allows data-driven clustering of voxels into functional regions based on temporal and spectral properties. Previous work has shown that different networks have characteristic frequency fingerprints, and the use of wavelets ensures that both the frequency and the timing of the BOLD fluctuations are considered during the clustering process. The method was applied to resting state data acquired from anesthetized rats, and the resulting clusters agreed well with known anatomical areas. Clusters were highly reproducible across subjects. Wavelet cross-correlation values between clusters from a single scan were significantly higher than the values from randomly matched clusters that shared no temporal information, indicating that wavelet-based analysis is sensitive to the relationship between areas. PMID:26481903
Ku, Bon Ki; Evans, Douglas E.
2015-01-01
For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. 
The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles. PMID:26526560
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. PMID:21682386
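The pipeline above, predicting SNR with the passive sonar equation and pushing it through a detector characterization, can be sketched as a small Monte Carlo model. All distributions and constants below (source level, noise level, absorption rate, detector curve) are illustrative placeholders, not the values used in the study:

```python
import math
import random

def detection_probability(dist_m, n_trials=10000, seed=1):
    """Monte Carlo estimate of the probability of detecting a click emitted
    at range dist_m, via SNR = SL - TL - NL (passive sonar equation).
    Every level, the absorption rate, and the detector curve are assumptions."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_trials):
        sl = rng.gauss(200.0, 5.0)                      # source level, dB re 1 uPa
        nl = rng.gauss(60.0, 3.0)                       # noise level, dB
        tl = 20.0 * math.log10(dist_m) + 0.01 * dist_m  # spreading + ~10 dB/km absorption
        snr = sl - tl - nl
        p_det = 1.0 / (1.0 + math.exp(-(snr - 12.0)))   # detector characterization
        if rng.random() < p_det:
            detected += 1
    return detected / n_trials
```

Averaging this probability over the assumed distributions of source level and geometry is what turns single-sensor click counts into a detection function usable for density estimation.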
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km². The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
An analytic model of toroidal half-wave oscillations: Implication on plasma density estimates
NASA Astrophysics Data System (ADS)
Bulusu, Jayashree; Sinha, A. K.; Vichare, Geeta
2015-06-01
The developed analytic model for toroidal oscillations under an infinitely conducting ionosphere ("Rigid-end") has been extended to the "Free-end" case, in which the conjugate ionospheres are infinitely resistive. The present direct analytic model (DAM) is the only analytic model that provides the field line structures of the electric and magnetic field oscillations associated with the "Free-end" toroidal wave for a generalized plasma distribution characterized by the power law ρ = ρ₀(r₀/r)^m, where m is the density index and r is the geocentric distance to the position of interest on the field line. This is important because different regions in the magnetosphere are characterized by different m. Significant improvement over the standard WKB solution and an excellent agreement with the numerical exact solution (NES) affirm the validity and advancement of DAM. In addition, we estimate the equatorial ion number density (assuming the H+ ion as the only species) using DAM, NES, and standard WKB for the Rigid-end as well as the Free-end case and illustrate their respective implications for computing ion number density. It is seen that the WKB method overestimates the equatorial ion density under the Rigid-end condition and underestimates it under the Free-end condition. The density estimates through DAM are far more accurate than those computed through WKB. Earlier analytic estimates of ion number density were restricted to m = 6, whereas DAM can account for generalized m while reproducing the density for m = 6 as envisaged by earlier models.
Variability of dental cone beam CT grey values for density estimations
Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K
2013-01-01
Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm⁻³, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility of grey value calibration was thoroughly investigated. PMID:23255537
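The recalibration step described above amounts to a linear map from device grey values to reference CT numbers, fitted on the phantom inserts. A sketch with made-up insert readings (the numbers are illustrative, not the study's measurements):

```python
def linear_fit(x, y):
    """Ordinary least squares fit y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# Grey values of the six phantom inserts on one CBCT device, paired with
# the reference CT numbers (illustrative values only)
cbct = [-980.0, 80.0, 120.0, 160.0, 310.0, 1900.0]
hu   = [-1000.0, 70.0, 120.0, 150.0, 300.0, 1800.0]
a, b = linear_fit(cbct, hu)
recalibrated = [a * g + b for g in cbct]
mean_error = sum(abs(r - t) for r, t in zip(recalibrated, hu)) / len(hu)
```

The residual mean_error after the fit corresponds to the "average error ... after recalibration" reported per device in the study; a high Pearson correlation does not by itself guarantee that this residual error is small.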
Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines
NASA Astrophysics Data System (ADS)
Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes
2011-03-01
Automatic banknote sheet cut-and-bundle machines are widely used within the scope of banknote production. Besides cutting and bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. In this work we present a new real-time touchless counting and cutting-blade quality assurance system, based on a color CCD camera and a dual-core computer, for cut-and-bundle applications in banknote production. The system, which applies wavelet-based multi-scale filtering, is able to count the banknotes inside a 100-note bundle within 200-300 ms, depending on the window size.
Multiscale seismic characterization of marine sediments by using a wavelet-based approach
NASA Astrophysics Data System (ADS)
Ker, Stephan; Le Gonidec, Yves; Gibert, Dominique
2015-04-01
We propose a wavelet-based method to characterize acoustic impedance discontinuities from a multiscale analysis of reflected seismic waves. The method is developed in the framework of the wavelet response (WR), in which dilated wavelets are used to sound a complex seismic reflector defined by a multiscale impedance structure. In the context of seismic imaging, we use the WR as a set of multiscale seismic attributes, in particular ridge functions, which contain most of the information quantifying the complex geometry of the reflector. We extend this approach to the analysis of seismic data acquired with broadband but frequency-limited source signals. The band-pass filtering associated with such real sources distorts the WR; to remove these effects, we develop an original processing method based on fractional derivatives of Lévy alpha-stable distributions in the formalism of the continuous wavelet transform (CWT). We demonstrate that the CWT of a seismic trace involving such a finite frequency bandwidth can be made equivalent to the CWT of the impulse response of the subsurface, defined over a reduced range of dilations controlled by the seismic source signal. In this dilation range, the multiscale seismic attributes are corrected for distortions, and we can thus merge multiresolution seismic sources to increase the frequency range of the multiscale analysis. As a first demonstration, we perform the source correction with the high- and very-high-resolution seismic sources of the SYSIF deep-towed seismic device and show that both can now be perfectly merged into an equivalent seismic source with an improved frequency bandwidth (220-2200 Hz). Such multiresolution seismic data fusion allows reconstructing the acoustic impedance of the subseabed based on the inverse wavelet transform properties extended to the source-corrected WR.
We illustrate the potential of this approach with deep-water seismic data acquired during the ERIG3D cruise and we compare the results with the multiscale analysis performed on synthetic seismic data based on ground truth measurements.
LSTA, Rawane Samb
2010-01-01
This thesis deals with the nonparametric estimation of the density f of the regression error term E in the model Y=m(X)+E, assuming its independence from the covariate X. The difficulty of this study lies in the fact that the regression error E is not observed. In such a setup, it would be unwise, for estimating f, to use a conditional approach based upon the probability distribution function of Y given X. Indeed, this approach suffers from the curse of dimensionality, so that the resulting estimator of the residual term E would have a considerably slower rate of convergence if the dimension of X is very high. Two approaches are proposed in this thesis to avoid the curse of dimensionality. The first approach uses the estimated residuals, while the second integrates a nonparametric conditional density estimator of Y given X. While proceeding in this way can circumvent the curse of dimensionality, a challenging issue is to evaluate the impact of the estimated residuals on the final estimator of the density f. We will also at...
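The first approach, estimating f from residuals, can be sketched in two steps: compute residuals from a fitted regression, then apply a kernel density estimator to them. For illustration the regression function is taken as known (m(x) = x squared) and the noise pattern and bandwidth are arbitrary choices:

```python
import math

def gaussian_kde(samples, h):
    """Kernel density estimator with Gaussian kernel and bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return lambda e: sum(math.exp(-0.5 * ((e - s) / h) ** 2) for s in samples) / norm

# Residual-based approach: with m(x) = x**2 assumed known for illustration,
# the residuals of a noisy sample feed straight into the KDE.
xs = [i / 10.0 for i in range(-20, 21)]
ys = [x ** 2 + ((-1) ** i) * 0.3 for i, x in enumerate(xs)]   # error is +/- 0.3
residuals = [y - x ** 2 for x, y in zip(xs, ys)]
f_hat = gaussian_kde(residuals, h=0.15)
```

Because the KDE operates on the one-dimensional residuals rather than on the joint law of (X, Y), its rate of convergence does not degrade with the dimension of X, which is the point of both approaches above.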
Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance
Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.
2014-01-01
Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km² and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km² study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information available to estimate parameters with high precision. PMID:25350557
Kun-Rodrigues, Célia; Salmona, Jordi; Besolo, Aubin; Rasolondraibe, Emmanuel; Rabarivola, Clément; Marques, Tiago A; Chikhi, Lounès
2014-06-01
Propithecus coquereli is one of the last sifaka species for which no reliable and extensive density estimates are yet available. Despite its endangered conservation status [IUCN, 2012] and recognition as a flagship species of the northwestern dry forests of Madagascar, its population in its last main refugium, the Ankarafantsika National Park (ANP), is still poorly known. Using line transect distance sampling surveys we estimated population density and abundance in the ANP. Furthermore, we investigated the effects of road, forest edge, and river proximity and of group size on sighting frequencies and density estimates. We provide here the first population density estimates throughout the ANP. We found that density varied greatly among surveyed sites (from 5 to ~100 ind/km²), which could result from significant (negative) effects of road and forest edge, and/or a (positive) effect of river proximity. Our results also suggest that the population size may be ~47,000 individuals in the ANP, hinting that the population likely underwent a strong decline in some parts of the Park in recent decades, possibly caused by habitat loss from fires and charcoal production and by poaching. We suggest community-based conservation actions for the largest remaining population of Coquerel's sifaka which will (i) maintain forest connectivity; (ii) implement alternatives to deforestation through charcoal production, logging, and grass fires; (iii) reduce poaching; and (iv) enable long-term monitoring of the population in collaboration with local authorities and researchers. PMID:24443250
Marques, Tiago A; Thomas, Len; Ward, Jessica; DiMarzio, Nancy; Tyack, Peter L
2009-04-01
Methods are developed for estimating the size/density of cetacean populations using data from a set of fixed passive acoustic sensors. The methods convert the number of detected acoustic cues into animal density by accounting for (i) the probability of detecting cues, (ii) the rate at which animals produce cues, and (iii) the proportion of false positive detections. Additional information is often required for estimation of these quantities, for example, from an acoustic tag applied to a sample of animals. Methods are illustrated with a case study: estimation of Blainville's beaked whale density over a 6-day period in spring 2005, using an 82-hydrophone wide-baseline array located in the Tongue of the Ocean, Bahamas. To estimate the required quantities, additional data are used from digital acoustic tags, attached to five whales over 21 deep dives, where cues recorded on some of the dives are associated with those received on the fixed hydrophones. Estimated density was 25.3 or 22.5 animals/1000 km², depending on assumptions about false positive detections, with 95% confidence intervals 17.3-36.9 and 15.4-32.9. These methods are potentially applicable to a wide variety of marine and terrestrial species that are hard to survey using conventional visual methods. PMID:19354374
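The cue-counting conversion described in the abstract can be sketched as a chain of corrections. This is a minimal illustration with assumed variable names and toy values, not the authors' implementation; in the actual study each quantity (detection probability, cue rate, false positive proportion) is itself estimated from auxiliary tag data:

```python
def cues_to_density(n_cues, false_pos_prop, p_detect, cue_rate_per_hr,
                    hours, n_sensors, area_per_sensor_km2):
    """Convert detected acoustic cues to animal density (animals/km^2).

    Follows the logic of the abstract: correct the cue count for false
    positives, divide by the cue detection probability, by the total
    monitored area, and by the per-animal cue production over the
    monitoring period.
    """
    true_cues = n_cues * (1.0 - false_pos_prop)      # remove false positives
    effective_cues = true_cues / p_detect            # account for missed cues
    cues_per_km2 = effective_cues / (n_sensors * area_per_sensor_km2)
    return cues_per_km2 / (cue_rate_per_hr * hours)  # cues -> animals
```

For example, 1000 detected cues with a 50% false positive proportion, 50% detection probability, 10 sensors each monitoring 10 km², and animals producing one cue per hour over 10 hours yields a density of 1 animal/km².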
A hierarchical model for estimating density in camera-trap studies
Royle, J. Andrew; Nichols, J.D.; Karanth, K.U.; Gopalaswamy, A.M.
2009-01-01
1. Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. 2. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. 3. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. 4. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km² during 2004. 5. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential 'holes' in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based 'captures' of individual animals.
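The key ingredient that lets spatial capture–recapture models of this kind account for individual movement is a detection function that declines with the distance between an animal's activity centre and a trap. A minimal sketch, assuming a half-normal detection model (a common choice, though the abstract does not name the paper's exact form) with illustrative parameter values:

```python
import numpy as np

def halfnormal_detection(d, p0=0.3, sigma=1.5):
    """Half-normal detection: encounter probability declines with the
    distance d between an individual's activity centre and a trap."""
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

def expected_detections(dens=0.5, half=10.0, step=0.25):
    """Expected number of distinct individuals detected by one trap at
    the origin, integrating the detection function over activity
    centres distributed with density `dens` per unit area."""
    xs = np.arange(-half, half, step)
    X, Y = np.meshgrid(xs, xs)
    d = np.sqrt(X**2 + Y**2)
    cell_area = step * step
    return float(np.sum(halfnormal_detection(d)) * dens * cell_area)
```

The numerical integral agrees with the analytic value p0 · dens · 2πσ², which is the kind of quantity the hierarchical model exploits to separate density from detectability.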
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly, our approach can easily be applied to other rare predator species. PMID:26698574
Hierarchical models for estimating density from DNA mark-recapture studies
Gardner, B.; Royle, J. Andrew; Wegan, M.T.
2009-01-01
Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km², which is lower than the estimated density (0.410 bears/km²) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
A Statistical Analysis for Estimating Fish Number Density with the Use of a Multibeam Echosounder
NASA Astrophysics Data System (ADS)
Schroth-Miller, Madeline L.
Fish number density can be estimated from the normalized second moment of acoustic backscatter intensity [Denbigh et al., J. Acoust. Soc. Am. 90, 457-469 (1991)]. This method assumes that the distribution of fish scattering amplitudes is known and that the fish are randomly distributed following a Poisson volume distribution within regions of constant density. It is most useful at low fish densities, relative to the resolution of the acoustic device being used, since the estimators quickly become noisy as the number of fish per resolution cell increases. New models that include noise contributions are considered. The methods were applied to an acoustic assessment of juvenile Atlantic Bluefin Tuna, Thunnus thynnus. The data were collected using a 400 kHz multibeam echo sounder during the summer months of 2009 in Cape Cod, MA. Due to the high resolution of the multibeam system used, the large size (approx. 1.5 m) of the tuna, and the spacing of the fish in the school, we expect there to be low fish densities relative to the resolution of the multibeam system. Results of the fish number density based on the normalized second moment of acoustic intensity are compared to fish packing density estimated using aerial imagery that was collected simultaneously.
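The second-moment relation underlying this method can be demonstrated with a simulation. The sketch below uses a deliberately simplified model, unit-amplitude scatterers with random phase and a Poisson-distributed count per resolution cell; the actual Denbigh-style estimator additionally weights by moments of the fish scattering-amplitude distribution, which the abstract notes must be known. Under these assumptions the normalized second moment of intensity is m2 = 2 + 1/N, so the mean density can be recovered as N = 1/(m2 − 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate echo intensity from a Poisson number of unit-amplitude,
# random-phase scatterers per resolution cell.
true_density = 3.0               # mean scatterers per cell
n_pings = 200_000
counts = rng.poisson(true_density, n_pings)

# Vectorized phasor sum: pad every ping to the maximum count and mask.
kmax = counts.max()
phases = rng.uniform(0.0, 2*np.pi, (n_pings, kmax))
mask = np.arange(kmax) < counts[:, None]
field = (np.exp(1j*phases) * mask).sum(axis=1)
intensities = np.abs(field)**2

# Normalized second moment: m2 = <I^2>/<I>^2 = 2 + 1/N for this model.
m2 = np.mean(intensities**2) / np.mean(intensities)**2
n_hat = 1.0 / (m2 - 2.0)
```

As the abstract notes, the estimator becomes noisy as density grows: m2 approaches 2, so small sampling errors in m2 produce large errors in N.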
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Estimation of density-dependent mortality of juvenile bivalves in the Wadden Sea.
Andresen, Henrike; Strasser, Matthias; van der Meer, Jaap
2014-01-01
We investigated density-dependent mortality within the early months of life of the bivalves Macoma balthica (Baltic tellin) and Cerastoderma edule (common cockle) in the Wadden Sea. Mortality is thought to be density-dependent in juvenile bivalves, because there is no proportional relationship between the size of the reproductive adult stocks and the numbers of recruits for both species. It is not known however, when exactly density dependence in the pre-recruitment phase occurs and how prevalent it is. The magnitude of recruitment determines year class strength in bivalves. Thus, understanding pre-recruit mortality will improve the understanding of population dynamics. We analyzed count data from three years of temporal sampling during the first months after bivalve settlement at ten transects in the Sylt-Rømø-Bay in the northern German Wadden Sea. Analyses of density dependence are sensitive to bias through measurement error. Measurement error was estimated by bootstrapping, and residual deviances were adjusted by adding process error. With simulations the effect of these two types of error on the estimate of the density-dependent mortality coefficient was investigated. In three out of eight time intervals density dependence was detected for M. balthica, and in zero out of six time intervals for C. edule. Biological or environmental stochastic processes dominated over density dependence at the investigated scale. PMID:25105293
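The core of a density-dependence analysis like this one is regressing per-capita log survival over a time interval on initial density; a negative slope is evidence of density-dependent mortality. A hedged sketch with invented parameter values (the study's actual analysis also corrects residual deviances for measurement error estimated by bootstrapping, which is only gestured at here by an ordinary nonparametric bootstrap):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated settlement counts with density-dependent survival
# (illustrative values): survival probability exp(a + b*N0), b < 0.
a_true, b_true = -0.2, -0.002
n0 = rng.integers(50, 1500, 120)                 # juveniles per plot at settlement
n1 = rng.binomial(n0, np.exp(a_true + b_true * n0))

# Least-squares estimate of the density-dependence coefficient b from
# per-capita log survival log(N1/N0) = a + b*N0 + error.
y = np.log(n1 / n0)
X = np.column_stack([np.ones(n0.size), n0])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)

# Nonparametric bootstrap CI for b.
boot = [np.linalg.lstsq(X[i], y[i], rcond=None)[0][1]
        for i in (rng.integers(0, n0.size, n0.size) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
```

A confidence interval for b that excludes zero, as in this simulation, corresponds to the "density dependence detected" outcome reported for some of the M. balthica intervals.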
NASA Astrophysics Data System (ADS)
Semler, Lindsay; Dettori, Lucia
The research presented in this article is aimed at developing an automated imaging system for classification of tissues in medical images obtained from Computed Tomography (CT) scans. The article focuses on using multi-resolution texture analysis, specifically: the Haar wavelet, Daubechies wavelet, Coiflet wavelet, and the ridgelet. The algorithm consists of two steps: automatic extraction of the most discriminative texture features of regions of interest and creation of a classifier that automatically identifies the various tissues. The classification step is implemented using a cross-validation Classification and Regression Tree approach. A comparison of wavelet-based and ridgelet-based algorithms is presented. Tests on a large set of chest and abdomen CT images indicate that, among the three wavelet-based algorithms, the one using texture features derived from the Haar wavelet transform clearly outperforms those based on the Daubechies and Coiflet transforms. The tests also show that the ridgelet-based algorithm is significantly more effective and that texture features based on the ridgelet transform are better suited for texture classification in CT medical images.
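The Haar case of the feature-extraction step is simple enough to sketch directly. Below is a minimal, assumed implementation (not the authors' code): one or more levels of the 2-D Haar transform, with the mean absolute value of each detail subband used as a texture feature of the kind fed to a classification tree:

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar wavelet transform.

    Returns (approximation, horizontal, vertical, diagonal) subbands.
    Image dimensions are assumed even.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_texture_features(img, levels=2):
    """Mean absolute energy of each detail subband at each level."""
    feats = []
    for _ in range(levels):
        img, LH, HL, HH = haar_level(img)
        feats += [np.mean(np.abs(LH)), np.mean(np.abs(HL)), np.mean(np.abs(HH))]
    return np.array(feats)
```

A uniform region yields all-zero detail features, while an oriented texture loads the corresponding detail subbands, which is what makes such features discriminative for tissue classification.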
USING AERIAL HYPERSPECTRAL REMOTE SENSING IMAGERY TO ESTIMATE CORN PLANT STAND DENSITY
Technology Transfer Automated Retrieval System (TEKTRAN)
Since corn plant stand density is important for optimizing crop yield, several researchers have recently developed ground-based systems for automatic measurement of this crop growth parameter. Our objective was to use data from such a system to assess the potential for estimation of corn plant stan...
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrologic and morphological properties of claypan landscapes cause variability in soybean root and shoot biomass. This study was conducted to develop predictive models of soybean root length density distribution (RLDd) using direct measurements and sensor based estimators of claypan morphology. A c...
Estimating the effect of Earth elasticity and variable water density on tsunami speeds
Tsai, Victor C.
Revised 25 December 2012; accepted 7 January 2013; published 13 February 2013. [1] The speed of tsunami ... comparisons of tsunami arrival times from the 11 March 2011 tsunami suggest, however, that the standard ...
A hybrid approach to crowd density estimation using statistical learning and texture classification
NASA Astrophysics Data System (ADS)
Li, Yin; Zhou, Bowen
2013-12-01
Crowd density estimation is a hot topic in the computer vision community. Established algorithms for crowd density estimation mainly focus on moving crowds, employing background modeling to obtain crowd blobs. However, people's motion is not obvious in many settings such as the waiting hall of an airport or the lobby of a railway station. Moreover, conventional algorithms for crowd density estimation cannot yield desirable results for all levels of crowding due to occlusion and clutter. We propose a hybrid method to address the aforementioned problems. First, statistical learning is introduced for background subtraction, which comprises a training phase and a test phase. The crowd images are gridded into small blocks, each denoting foreground or background. HOG features are then extracted from each block and fed into a binary SVM. Hence, crowd blobs can be obtained from the classification results of the trained classifier. Second, the crowd images are treated as texture images. Therefore, the estimation problem can be formulated as texture classification. The density level can be derived according to the classification results. We validate the proposed algorithm on real scenarios where the crowd motion is not obvious. Experimental results demonstrate that our approach can obtain the foreground crowd blobs accurately and work well for different levels of crowding.
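The first stage, per-block HOG features into a binary SVM, can be sketched on toy data. Everything below is an assumption for illustration (a single-cell, unnormalised HOG and synthetic "crowd" vs "background" blocks), using scikit-learn's SVC as the binary SVM:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def hog_block(block, n_bins=9):
    """Simplified HOG descriptor for one grid block: a single cell of
    unsigned gradient orientations, magnitude-weighted.  (Left
    unnormalised so gradient energy also helps separate the classes.)"""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned, [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist

# Toy stand-ins for the gridded blocks: "crowd" blocks are strongly
# textured, "background" blocks are smooth.
def make_block(crowd):
    base = rng.normal(0.0, 0.05, (16, 16))
    if crowd:
        base += rng.normal(0.0, 1.0, (16, 16))     # high-frequency texture
    return base

labels = np.array([0, 1] * 100)
X = np.array([hog_block(make_block(c)) for c in labels])

clf = SVC(kernel="linear").fit(X[:150], labels[:150])
acc = clf.score(X[150:], labels[150:])
```

In the method described above, the blocks predicted as foreground are then stitched back into crowd blobs for the texture-classification stage.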
Hathorn, Bryan C.
Estimation of Vibrational Frequencies and Vibrational Densities of States in Isotopically Substituted Nonlinear Triatomic Molecules. B. C. Hathorn and R. A. Marcus. ... for obtaining the unknown vibration frequencies of the many asymmetric isotopomers of a molecule from those ...
BINOMIAL SAMPLING TO ESTIMATE CITRUS RUST MITE (ACARI: ERIOPHYIDAE) DENSITIES ON ORANGE FRUIT
Technology Transfer Automated Retrieval System (TEKTRAN)
Binomial sampling based on the proportion of samples infested was investigated as a method for estimating mean densities of citrus rust mites, Phyllocoptruta oleivora (Ashmead) and Aculops pelekassi (Keifer), on oranges. Data for the investigation were obtained by counting the number of motile mites...
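The simplest form of the binomial-sampling idea assumes Poisson-distributed counts, so the chance that a fruit sample has no mites is exp(−m) and the mean density can be recovered from the proportion infested as m = −ln(1 − p). This is a hedged sketch of that textbook relation, not the cited study's fitted model (aggregated mite populations would need, e.g., a negative-binomial version):

```python
import numpy as np

rng = np.random.default_rng(0)

# Under a Poisson model the probability a sample is uninfested is
# exp(-m), so m_hat = -ln(1 - p_infested).
true_mean = 2.0
counts = rng.poisson(true_mean, 5000)        # mites per sampled fruit
p_infested = np.mean(counts > 0)
m_hat = -np.log(1.0 - p_infested)
```

The appeal for field scouting is that only presence/absence must be scored per sample, not full mite counts.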
Schick, Anton
... in nonparametric regression. Anton Schick and Wolfgang Wefelmeyer. Keywords: ... statistic, local U-statistic, local polynomial smoother, monotone regression function. Abstract: Consider a nonparametric regression ... of two kernel estimators for these densities, or more generally by a local von Mises statistic ...
Estimating Densities of the Pest Halotydeus destructor (Acari: Penthaleidae) in Canola.
Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A
2014-12-01
Development of sampling techniques to effectively estimate invertebrate densities in the field is essential for effective implementation of pest control programs, particularly when making informed spray decisions around economic thresholds. In this article, we investigated the influence of several factors to devise a sampling strategy to estimate Halotydeus destructor Tucker densities in a canola paddock. Direct visual counts were found to be the most suitable approach for estimating mite numbers, with higher densities detected than the vacuum sampling method. Visual assessments were impacted by the operator, sampling date, and time of day. However, with the exception of operator (more experienced operator detected higher numbers of mites), no obvious trends were detected. No patterns were found between H. destructor numbers and ambient temperature, relative humidity, wind speed, cloud cover, or soil surface conditions, indicating that these factors may not be of high importance when sampling mites during autumn and winter months. We show further support for an aggregated distribution of H. destructor within paddocks, indicating that a stratified random sampling program is likely to be most appropriate. Together, these findings provide important guidelines for Australian growers around the ability to effectively and accurately estimate H. destructor densities. PMID:26470087
Adjusted KNN Model in Estimating User Density in Small Areas with Poor Signal Strength
Greenberg, Albert
Localized user density estimation is fundamental in many fields such as urban planning and traffic engineering. ... tower zoning and permitting regulations, limited licensed wireless spectrum ... a radio could cover an area with a radius of a few tens of kilometers, but a smallcell, by design, only covers ...
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation.
We have applied the WAMR method for numerical simulation of several benchmark problems including simulation of traveling three-dimensional reactive and inert transpacific pollution plumes. It was shown earlier that conventionally used global CTMs implemented on stationary grids are incapable of reproducing the dynamics of these plumes due to excessive numerical diffusion caused by limited grid resolution. It has been shown that the WAMR algorithm allows us to use one to two orders of magnitude finer grids than static grid techniques in the regions of fine spatial scales without significantly increasing CPU time. Therefore the developed WAMR method has significant advantages over conventional fixed-resolution computational techniques in terms of accuracy and/or computational cost and makes it possible to accurately simulate important multi-scale chemical transport problems that cannot be simulated with the standard static grid techniques currently utilized by the majority of global atmospheric chemistry models. This work is supported by a grant from National Science Foundation under Award No. HRD-1036563.
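The core refinement criterion of wavelet-based AMR can be illustrated in one dimension: interpolating-wavelet detail coefficients (the difference between a fine-grid value and its interpolation from the coarser grid) are large only where the solution has fine-scale structure, and those points are flagged for refinement. A minimal sketch with an assumed threshold, not the WAMR implementation itself:

```python
import numpy as np

def refine_flags(values, threshold=1e-3):
    """Flag odd-indexed fine-grid points whose interpolating-wavelet
    detail coefficient (value minus linear interpolation from the two
    even-indexed coarse-grid neighbours) exceeds `threshold`."""
    coarse = values[0::2]
    details = np.abs(values[1:-1:2] - 0.5*(coarse[:-1] + coarse[1:]))
    return details > threshold

# A plume-like profile: constant far field with one sharp front; only
# points near the front should be flagged for refinement.
x = np.linspace(0.0, 1.0, 257)
u = np.tanh((x - 0.5) / 0.01)
flags = refine_flags(u)
```

Because only a small fraction of points is flagged, the adaptive grid concentrates resolution at the plume front, which is exactly why WAMR avoids the cost of uniformly fine global grids.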
Unbiased Estimate of Dark Energy Density from Type Ia Supernova Data
NASA Astrophysics Data System (ADS)
Wang, Yun; Lovelace, Geoffrey
2001-12-01
Type Ia supernovae (SNe Ia) are currently the best probes of the dark energy in the universe. To constrain the nature of dark energy, we assume a flat universe and that the weak energy condition is satisfied, and we allow the density of dark energy, ρX(z), to be an arbitrary function of redshift. Using simulated data from a space-based SN pencil-beam survey, we find that by optimizing the number of parameters used to parameterize the dimensionless dark energy density, f(z) = ρX(z)/ρX(z=0), we can obtain an unbiased estimate of both f(z) and the fractional matter density of the universe, Ωm. A plausible SN pencil-beam survey (with a square degree field of view and for an observational duration of 1 yr) can yield about 2000 SNe Ia with 0 ≤ z ≤ 2. Such a survey in space would yield SN peak luminosities with a combined intrinsic and observational dispersion of σ(m_int) = 0.16 mag. We find that for such an idealized survey, Ωm can be measured to 10% accuracy, and the dark energy density can be estimated to ~20% to z~1.5, and ~20%-40% to z~2, depending on the time dependence of the true dark energy density. Dark energy densities that vary more slowly can be more accurately measured. For the anticipated Supernova/Acceleration Probe (SNAP) mission, Ωm can be measured to 14% accuracy, and the dark energy density can be estimated to ~20% to z~1.2. Our results suggest that SNAP may gain much sensitivity to the time dependence of the dark energy density and Ωm by devoting more observational time to the central pencil-beam fields to obtain more SNe Ia at z>1.2. We use both a maximum likelihood analysis and a Monte Carlo analysis (when appropriate) to determine the errors of estimated parameters. We find that the Monte Carlo analysis gives a more accurate estimate of the dark energy density than the maximum likelihood analysis.
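The observable linking f(z) to SN Ia data is the luminosity distance, whose comoving part in a flat universe follows from H(z)²/H0² = Ωm(1+z)³ + (1−Ωm)f(z). A minimal numerical sketch of that integral (function names and defaults are assumptions; f(z) = 1 recovers a cosmological constant):

```python
import numpy as np

def comoving_distance(z_max, omega_m=0.3, f=lambda z: 1.0, n=2048):
    """Dimensionless comoving distance (units of c/H0) in a flat
    universe, with f(z) = rho_X(z)/rho_X(0) scaling the dark-energy
    term, as in the abstract's parameterization:
        H(z)^2/H0^2 = omega_m*(1+z)^3 + (1 - omega_m)*f(z).
    """
    zs = np.linspace(0.0, z_max, n)
    fz = np.array([f(z) for z in zs])
    inv_E = 1.0 / np.sqrt(omega_m * (1.0 + zs)**3 + (1.0 - omega_m) * fz)
    # Trapezoidal integration of dz / E(z).
    return float(np.sum((inv_E[1:] + inv_E[:-1]) * np.diff(zs)) / 2.0)
```

Fitting a piecewise parameterization of f(z) to observed distance moduli is then what yields the estimates of f(z) and Ωm discussed above.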
Robel, G.L.; Fisher, W.L.
1999-01-01
Production of and consumption by hatchery-reared fingerling (age-0) smallmouth bass Micropterus dolomieu at various simulated stocking densities were estimated with a bioenergetics model. Fish growth rates and pond water temperatures during the 1996 growing season at two hatcheries in Oklahoma were used in the model. Fish growth and simulated consumption and production differed greatly between the two hatcheries, probably because of differences in pond fertilization and mortality rates. Our results suggest that appropriate stocking density depends largely on prey availability as affected by pond fertilization and on fingerling mortality rates. The bioenergetics model provided a useful tool for estimating production at various stocking density rates. However, verification of physiological parameters for age-0 fish of hatchery-reared species is needed.
Distributed Density Estimation Based on a Mixture of Factor Analyzers in a Sensor Network
Wei, Xin; Li, Chunguang; Zhou, Liang; Zhao, Li
2015-01-01
Distributed density estimation in sensor networks has received much attention due to its broad applicability. When encountering high-dimensional observations, a mixture of factor analyzers (MFA) is taken to replace a mixture of Gaussians for describing the distributions of observations. In this paper, we study distributed density estimation based on a mixture of factor analyzers. Existing estimation algorithms for the MFA are for the centralized case, which are not suitable for distributed processing in sensor networks. We present distributed density estimation algorithms for the MFA and its extension, the mixture of Student's t-factor analyzers (MtFA). We first define an objective function as the linear combination of local log-likelihoods. Then, we give the derivation of the distributed estimation algorithms for the MFA and MtFA in detail. In these algorithms, the local sufficient statistics (LSS) are calculated at first and diffused. Then, each node performs a linear combination of the received LSS from nodes in its neighborhood to obtain the combined sufficient statistics (CSS). Parameters of the MFA and the MtFA can be obtained by using the CSS. Finally, we evaluate the performance of these algorithms by numerical simulations and an application example. Experimental results validate the promising performance of the proposed algorithms. PMID:26251903
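The diffusion step described above, in which each node linearly combines its neighbours' local sufficient statistics, can be sketched with plain Gaussian statistics standing in for the MFA's (the network topology, weights, and iteration count are all assumptions for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ring network of 6 nodes, each observing the same underlying Gaussian.
n_nodes, n_local = 6, 200
data = [rng.normal(5.0, 2.0, n_local) for _ in range(n_nodes)]

# Local sufficient statistics (LSS): count, sum, sum of squares.
lss = np.array([[len(d), d.sum(), (d**2).sum()] for d in data])

# Metropolis-style combination weights for a ring: each node averages
# itself with its two neighbours.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1/3
    W[i, (i-1) % n_nodes] = 1/3
    W[i, (i+1) % n_nodes] = 1/3

# Diffusion: repeatedly combine neighbours' statistics (CSS).
css = lss.copy()
for _ in range(10):
    css = W @ css

# Each node recovers a parameter estimate from its combined statistics.
means = css[:, 1] / css[:, 0]
```

After a few iterations every node's estimate approaches the estimate a centralized algorithm would compute from the pooled data, without any node ever sharing its raw observations.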
Biases in velocity and Q estimates from 3D density structure
NASA Astrophysics Data System (ADS)
Płonka, Agnieszka; Fichtner, Andreas
2015-04-01
We propose to develop a seismic tomography technique that directly inverts for density, using complete seismograms rather than arrival times of certain waves only. The first task in this challenge is to systematically study the imprints of density on synthetic seismograms. To compute the full seismic wavefield in a 3D heterogeneous medium without making significant approximations, we use numerical wave propagation based on a spectral-element discretization of the seismic wave equation. We consider a 2000 by 1000 km wide and 500 km deep spherical section, with the 1D Earth model PREM (with 40 km crust thickness) as a background. Onto this (in the uppermost 40 km) we superimpose 3D randomly generated velocity and density heterogeneities of various magnitudes and correlation lengths. We use different random realizations of heterogeneity distribution. We compare the synthetic seismograms for 3D velocity and density structure with 3D velocity structure and with the 1D background, calculating relative amplitude differences and time shifts as functions of time and frequency. For 3D density variations of 7% relative to PREM, the biggest time shifts reach 2.5 s, and the biggest relative amplitude differences approach 90%. Based on the experimental changes in arrival times and amplitudes, we quantify the biases introduced in velocity and Q estimates when 3D density is not taken into account. For real data the effects may be more severe, given that commonly observed crustal velocity variations of 10–20% suggest density variations of around 15% in the upper crust. Our analyses indicate that reasonably sized density variations within the crust can leave a strong imprint on both traveltimes and amplitudes. While this can produce significant biases in velocity and Q estimates, the positive conclusion is that seismic waveform inversion for density may become feasible.
Density estimation of small-mammal populations using a trapping web and distance sampling methods
Anderson, David R.; Burnham, Kenneth P.; White, Gary C.; Otis, David L.
1983-01-01
Distance sampling methodology is adapted to enable animal density (number per unit of area) to be estimated from capture-recapture and removal data. A trapping web design provides the link between capture data and distance sampling theory. The estimator of density is D = M(t+1) f(0), where M(t+1) is the number of individuals captured and f(0) is computed from the M(t+1) distances from the web center to the traps in which those individuals were first captured. It is possible to check qualitatively the critical assumption on which the web design and the estimator are based. This is a conceptual paper outlining a new methodology, not a definitive investigation of the best specific way to implement this method. Several alternative sampling and analysis methods are possible within the general framework of distance sampling theory; a few alternatives are discussed and an example is given.
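The f(0) term in the estimator above is the fitted probability density of first-capture distances evaluated at zero. A toy sketch, taking the estimator literally and assuming a half-normal detection model with an invented scale (the paper leaves the choice of detection model open, and a full web analysis would also account for the web's annular geometry):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated first-capture distances from the web centre, drawn from a
# half-normal with scale sigma (illustrative only).
sigma_true = 10.0
r = np.abs(rng.normal(0.0, sigma_true, 400))

# Half-normal MLE: sigma_hat^2 = mean(r^2); the fitted pdf at zero is
# f(0) = sqrt(2/pi) / sigma.  Plugging into D = M(t+1) * f(0) then
# converts the capture total into a density-scale quantity.
sigma_hat = np.sqrt(np.mean(r**2))
f0_hat = np.sqrt(2.0/np.pi) / sigma_hat
density_hat = len(r) * f0_hat          # D = M(t+1) * f(0)
```

The qualitative check mentioned in the abstract corresponds to inspecting whether the observed distance histogram is consistent with the fitted detection curve.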
NASA Astrophysics Data System (ADS)
Park, J.; Lühr, H.; Stolle, C.; Malhotra, G.; Baker, J. B. H.; Buchert, S.; Gill, R.
2015-07-01
Plasma convection in the high-latitude ionosphere provides important information about magnetosphere-ionosphere-thermosphere coupling. In this study we estimate the along-track component of plasma convection within and around the polar cap, using electron density profiles measured by the three Swarm satellites. The velocity values estimated from the two different satellite pairs agree with each other. In both hemispheres the estimated velocity is generally anti-sunward, especially for higher speeds. The obtained velocity is in qualitative agreement with Super Dual Auroral Radar Network data. Our method can supplement currently available instruments for ionospheric plasma velocity measurements, especially in cases where these traditional instruments suffer from their inherent limitations. Also, the method can be generalized to other satellite constellations carrying electron density probes.
Estimating effective data density in a satellite retrieval or an objective analysis
NASA Technical Reports Server (NTRS)
Purser, R. J.; Huang, H.-L.
1993-01-01
An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
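The idea of reading effective data density off the model resolution function can be illustrated on a toy linear retrieval. This is a sketch under simplifying assumptions (a random forward operator and ridge-style regularization standing in for the background error term); the smoothed diagonal plays the role of the proposed density estimate, and its sum approximates the trace, the degrees of freedom for signal:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_model = 20, 50
K = rng.normal(size=(n_obs, n_model))   # hypothetical forward (observation) operator
lam = 1.0                               # regularization, standing in for background error

# Ridge-style retrieval m_hat = (K^T K + lam I)^-1 K^T y has model resolution
# R = (K^T K + lam I)^-1 K^T K; row i shows how retrieval i smears the true model.
R = np.linalg.solve(K.T @ K + lam * np.eye(n_model), K.T @ K)

# Effective data density: a smoothed version of the diagonal (trace components).
# Away from the edges its sum matches trace(R), the degrees of freedom for signal.
density = np.convolve(np.diag(R), np.ones(5) / 5, mode="same")
dof_signal = np.trace(R)
print(f"degrees of freedom for signal: {dof_signal:.2f} (at most {n_obs} observations)")
```

Because R is a function of K^T K alone here, its eigenvalues lie in [0, 1), so the trace is bounded by the number of observations, consistent with the invariance property the abstract describes.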
Reader Variability in Breast Density Estimation from Full-Field Digital Mammograms
Keller, Brad M.; Nathan, Diane L.; Gavenonis, Sara C.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2013-01-01
Rationale and Objectives Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, “for processing”) or vendor postprocessed (ie, “for presentation”) digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman’s risk for breast cancer. Materials and Methods Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. Results The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Conclusion Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. PMID:23465381
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
Fracture density estimates in glaciogenic deposits from P-wave velocity reductions
Karaman, A.; Carpenter, P.J.
1997-01-01
Subsidence-induced fracturing of glaciogenic deposits over coal mines in the southern Illinois basin alters hydraulic properties of drift aquifers and exposes these aquifers to surface contaminants. In this study, refraction tomography surveys were used in conjunction with a generalized form of a seismic fracture density model to estimate the vertical and lateral extent of fracturing in a 12-m thick overburden of loess, clay, glacial till, and outwash above a longwall coal mine at 90 m depth. This generalized model accurately predicted fracture trends and densities from azimuthal P-wave velocity variations over unsaturated single- and dual-parallel fractures exposed at the surface. These fractures extended at least 6 m and exhibited 10-15 cm apertures at the surface. The pre- and postsubsidence velocity ratios were converted into fracture densities that exhibited qualitative agreement with the observed surface and inferred subsurface fracture distribution. Velocity reductions as large as 25% were imaged over the static tension zone of the mine where fracturing may extend to depths of 10-15 m. Finally, the seismically derived fracture density estimates were plotted as a function of subsidence-induced drawdown across the panel to estimate the average specific storage of the sand and gravel lower drift aquifer. This value was at least 20 times higher than the presubsidence (unfractured) specific storage for the same aquifer.
Estimating density dependence in time-series of age-structured populations.
Lande, R; Engen, S; Saether, B-E
2002-01-01
For a life history with age at maturity alpha, and stochasticity and density dependence in adult recruitment and mortality, we derive a linearized autoregressive equation with time lags of 1 to alpha years. Contrary to current interpretations, the coefficients for different time lags in the autoregressive dynamics do not simply measure delayed density dependence, but also depend on life-history parameters. We define a new measure of total density dependence in a life history, D, as the negative elasticity of population growth rate per generation with respect to change in population size, D = -∂ln(λ^T)/∂ln N, where λ is the asymptotic multiplicative growth rate per year, T is the generation time, and N is adult population size. We show that D can be estimated from the sum of the autoregression coefficients. We estimated D in populations of six avian species for which life-history data and unusually long time-series of complete population censuses were available. Estimates of D were of the order of 1 or higher, indicating strong, statistically significant density dependence in four of the six species. PMID:12396510
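The generic computational step, fitting the autoregression by least squares and summing the lag coefficients, can be sketched on synthetic data. This is only an illustration of that step on a toy AR(2) series; mapping the coefficient sum to D in a real life history follows the paper's derivation, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a toy log-abundance series x_t = c + b1*x_{t-1} + b2*x_{t-2} + noise.
b_true = np.array([0.6, 0.2])
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 + b_true[0] * x[t - 1] + b_true[1] * x[t - 2] + 0.1 * rng.normal()

# Least-squares AR(2) fit: regress x_t on an intercept and its first two lags.
Y = x[2:]
X = np.column_stack([np.ones(len(Y)), x[1:-1], x[0:-2]])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
b_hat = coef[1:]
print("lag coefficients:", b_hat, "sum:", b_hat.sum())
```

With a stationary series (coefficient sum below 1) the regression recovers the lag structure; the sum of the fitted coefficients is the quantity the abstract relates to total density dependence.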
Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
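The contrast between a histogram tally (one bin per event) and a kernel density estimator (every tally point scored per event) can be shown with a small numpy sketch; the Gaussian kernel, bandwidth, and sample distribution are illustrative choices, not the estimator from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, size=5000)   # stand-in for scored event positions
grid = np.linspace(-3, 3, 61)

# Histogram tally: each sample contributes to exactly one bin.
hist, edges = np.histogram(samples, bins=grid, density=True)

# Gaussian KDE: each sample contributes to *every* tally point, weighted by
# the kernel; this is what lets a single event score at multiple points.
h = 0.2  # bandwidth
kde = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / h) ** 2)
kde = kde.sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

true_pdf = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
print("max |KDE - true pdf|:", np.abs(kde - true_pdf).max())
```

Because the KDE pools information across neighboring events, its pointwise uncertainty is set by the bandwidth rather than by the tally-grid resolution, which is the variance advantage the abstract describes.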
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed, capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
Zhang, Minling; Crocker, Robert L; Mankin, Richard W; Flanders, Kathy L; Brandhorst-Hubbard, Jamee L
2003-12-01
Incidental sounds produced by Phyllophaga crinita (Burmeister) and Cyclocephala lurida (Bland) (Coleoptera: Scarabaeidae) white grubs were monitored with single- and multiple-sensor acoustic detection systems in turf fields and golf course fairways in Texas. The maximum detection range of an individual acoustic sensor was measured in a greenhouse as approximately the area enclosed in a 26.5-cm-diameter perimeter (552 cm2). A single-sensor acoustic system was used to rate the likelihood of white grub infestation at monitored sites, and a four-sensor array was used to count the numbers of white grubs at sites where infestations were identified. White grub population densities were acoustically estimated by dividing the estimated numbers of white grubs by the area of the detection range. For comparisons with acoustic monitoring methods, infestations were assessed also by examining 10-cm-diameter soil cores collected with a standard golf cup-cutter. Both acoustic and cup-cutter assessments of infestation and estimates of white grub population densities were verified by excavation and sifting of the soil around the sensors after each site was monitored. The single-sensor acoustic method was more successful in assessing infestations at a recording site than was the cup-cutter method, possibly because the detection range was larger than the area of the soil core. White grubs were recovered from >90% of monitored sites rated at medium or high likelihood of infestation. Infestations were successfully identified at 23 of the 24 sites where white grubs were recovered at densities >50/m2, the threshold for economic damage. The four-sensor array yielded the most accurate estimates of the numbers of white grubs in the detection range, enabling reliable, nondestructive estimation of white grub population densities. However, tests with the array took longer and were more difficult to perform than tests with the single sensor. PMID:14977114
Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand
2010-01-01
We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876
A lattice estimation approach for the automatic evaluation of corneal endothelium density.
Grisan, Enrico; Paviotti, Anna; Laurenti, Nicola; Ruggeri, Alfredo
2005-01-01
The analysis of microscopy images of corneal endothelium is routinely carried out at eye banks to assess cell density, one of the main indicators of cornea health state and quality. We propose here a new method to derive endothelium cell density that, at variance with most of the available techniques, does not require the identification of cell contours. It exploits the feature that endothelium cells are approximately laid out as a regular tessellation of hexagonal shapes. This technique estimates the inverse transpose of a matrix generating this cellular lattice, from which the density is easily obtained. The algorithm has been implemented in a Matlab prototype and tested on a set of 21 corneal endothelium images. The cell densities obtained matched well with those manually estimated by eye-bank experts: the percent difference between them was on average -0.1% (6.5% for absolute values). Although the performance of this new algorithm on our test set is good, a careful evaluation on a much larger data set is needed before any clinical application of the proposed technique can be envisaged. The collection of an adequate number of endothelium images and of their manual densities is currently in progress. PMID:17282540
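The final step, obtaining density from a lattice-generating matrix, is simple: one cell per lattice unit cell gives density = 1/|det(V)|, where the columns of V generate the lattice. A minimal sketch with a hypothetical hexagonal basis and an assumed cell spacing (the paper estimates V from the image; here it is written down directly):

```python
import numpy as np

# Hexagonal lattice with nearest-neighbor spacing a: one cell per unit cell,
# so density = 1 / |det(V)| where the columns of V generate the lattice.
a = 20.0  # hypothetical cell spacing in micrometers
V = np.array([[a, a / 2.0],
              [0.0, a * np.sqrt(3.0) / 2.0]])  # hexagonal lattice basis

density = 1.0 / abs(np.linalg.det(V))  # cells per square micrometer
cells_per_mm2 = density * 1.0e6
print(f"estimated endothelial density: {cells_per_mm2:.0f} cells/mm^2")
```

A 20 µm spacing gives roughly 2900 cells/mm², in the physiological range, which shows why estimating the generating matrix alone suffices, with no cell-contour segmentation needed.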
Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette
2010-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1 sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
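The density-reconstruction step can be sketched with a back-of-the-envelope drag model. All numbers below are hypothetical stand-ins, and the free-molecular drag relation F ≈ ρ v² A (drag coefficient absorbed, of order 1) is an assumption of this sketch, not the mission's calibrated model:

```python
# Hypothetical values; the actual Cassini numbers come from spacecraft telemetry
# and the estimated disturbance torque described in the abstract.
torque = 0.02        # N*m, disturbance torque recovered from attitude-error telemetry
arm = 1.5            # m, offset of the center of pressure from the center of mass
area = 15.0          # m^2, projected spacecraft area
v = 14.0e3           # m/s, flyby speed relative to Enceladus

# Free-molecular drag: F ~ rho * v^2 * A, and the disturbance torque is
# T = F * L, so the plume density is rho = T / (L * A * v^2).
force = torque / arm
rho = force / (area * v**2)
print(f"reconstructed plume density: {rho:.3e} kg/m^3")
```

Repeating this inversion over the telemetry time series is what yields the density-versus-time history that the plume model is then fitted to.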
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class.
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for the unequal sizes of the presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are already available in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
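A basic nearest-neighbor density ratio estimator is short to write down: the k-NN density estimate p(x) ≈ k/(n V_d r_k(x)^d) is formed for both samples, and the volume constant cancels in the ratio. This sketch omits the paper's cross-validated choice of k and uses brute-force distances; samples, dimensions, and k are illustrative:

```python
import numpy as np

def knn_radius(points, queries, k):
    # distance from each query to its k-th nearest neighbor among `points`
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(4)
d, k = 2, 10
train = rng.normal(size=(2000, d))   # "source" sample (labeled training data)
test = rng.normal(size=(2000, d))    # "target" sample (unlabeled data)
queries = rng.normal(size=(200, d))

# p_hat(x) ~ k / (n * V_d * r_k(x)^d); in the target/source ratio the volume
# constant V_d cancels, leaving (n_train / n_test) * (r_train / r_test)^d.
r_tr = knn_radius(train, queries, k)
r_te = knn_radius(test, queries, k)
weights = (len(train) / len(test)) * (r_tr / r_te) ** d
print("median importance weight:", np.median(weights))
```

Here both samples come from the same distribution, so the importance weights should scatter around 1; under genuine covariate shift they up-weight training points that fall where the target data are dense.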
Somershoe, S.G.; Twedt, D.J.; Reid, B.
2006-01-01
We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.
Moment series for moment estimators of the parameters of a Weibull density
Bowman, K.O.; Shenton, L.R.
1982-01-01
Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^{-24}. From these a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^{-12} are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
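The moment estimator underlying this analysis equates the sample coefficient of variation with its Weibull expression and inverts for the shape c. A minimal sketch of that inversion by bisection (the paper's series expansions for the estimator's sampling moments are not reproduced here; the target CV is taken from a known c = 2 for checking):

```python
import numpy as np
from math import gamma

def weibull_cv(c):
    # coefficient of variation of a Weibull with shape c; the scale cancels
    g1, g2 = gamma(1 + 1 / c), gamma(1 + 2 / c)
    return np.sqrt(g2 - g1**2) / g1

# Moment estimation: match the (here, exact) CV and invert numerically for c.
cv_target = weibull_cv(2.0)   # in practice this is the sample CV
lo, hi = 0.5, 20.0            # CV is decreasing in c on this bracket
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if weibull_cv(mid) > cv_target:
        lo = mid              # CV too large means c is still too small
    else:
        hi = mid
c_hat = 0.5 * (lo + hi)
print(f"recovered shape parameter: {c_hat:.4f}")
```

Because the moment equation has no closed-form inverse, exactly this kind of implicit solve is why the paper must treat the shape parameter's moments via series rather than explicit estimators.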
Liu, Xin; Wang, Hongkai; Xu, Mantao; Nie, Shengdong; Lu, Hongbing
2014-11-01
Single-view x-ray luminescence computed tomography (XLCT) imaging has a short data collection time that allows fast, non-invasive resolution of the three-dimensional (3-D) distribution of x-ray-excitable nanophosphors within small animals in vivo. However, the single-view reconstruction suffers from a severely ill-posed problem because data from only a single angle are used in the reconstruction. To alleviate the ill-posedness, in this paper we propose a wavelet-based reconstruction approach, which is achieved by applying a wavelet transformation to the acquired single-view measurements. To evaluate the performance of the proposed method, an in vivo experiment was performed on a cone beam XLCT imaging system. The experimental results demonstrate that the proposed method can not only use the full set of measurements produced by the CCD, but also accelerate image reconstruction while preserving the spatial resolution of the reconstruction. Hence, it is suitable for dynamic XLCT imaging studies. PMID:25426315
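The core operation, re-expressing the measurement vector in a wavelet basis without losing information, can be illustrated with one level of an orthonormal Haar transform. This is only a sketch of that transform step on a synthetic measurement row; the actual XLCT reconstruction then solves the regularized inverse problem in the wavelet domain, which is not shown here:

```python
import numpy as np

def haar_step(x):
    # one level of the orthonormal Haar transform: averages and details
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

rng = np.random.default_rng(5)
# Stand-in for one row of CCD measurements: smooth luminescence plus noise.
t = np.linspace(0, 1, 256)
y = np.exp(-((t - 0.5) / 0.1) ** 2) + 0.01 * rng.normal(size=256)

a, d = haar_step(y)
# The orthonormal transform preserves energy (Parseval), so working in the
# wavelet domain still uses the full measurement content, just re-expressed.
energy_in = np.sum(y**2)
energy_out = np.sum(a**2) + np.sum(d**2)
print(f"energy before: {energy_in:.6f}, after: {energy_out:.6f}")
```

Energy preservation is what justifies the abstract's claim that the method uses the full set of CCD measurements while reorganizing them into a better-conditioned representation.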
NASA Astrophysics Data System (ADS)
Sondhiya, Deepak Kumar; Gwal, Ashok Kumar; Verma, Shivali; Kasde, Satish Kumar
In this paper, a wavelet-based neural network system for the detection and identification of four types of VLF whistler transients (dispersive, diffuse, spiky, and multipath) is implemented and tested. The discrete wavelet transform (DWT) technique is integrated with a feed-forward neural network (FFNN) model to construct the identifier. First, the multi-resolution analysis (MRA) technique of the DWT and Parseval's theorem are employed to extract the characteristic features of the transients at different resolution levels. Second, the FFNN classifies the transients according to the extracted features. The proposed methodology greatly reduces the number of features describing the transients without losing their original properties, so less memory space and computing time are required. Various transient events are tested; the results show that the identifier can detect whistler transients efficiently. Keywords: discrete wavelet transform, multi-resolution analysis, Parseval's theorem, feed-forward neural network
Hariharan, G
2014-05-01
In this paper, a wavelet-based approximation method is introduced for solving the Newell-Whitehead (NW) and Allen-Cahn (AC) equations. To the best of our knowledge, no rigorous Legendre wavelet solution has so far been reported for the NW and AC equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated, and the boundary conditions are applied using the integration constants. With the help of Legendre wavelet operational matrices, the aforesaid equations are converted into an algebraic system. Block pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed method is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method. PMID:24599524
Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers
NASA Astrophysics Data System (ADS)
Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.
2013-12-01
The current study deals with the heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The results of HRV indicated dominance towards the sympathetic nervous system activity in smokers. The heart rate was found to be higher in smokers than in non-smokers (p < 0.05). The frequency domain analysis showed an increase in the LF and LF/HF components with a subsequent decrease in the HF component. The HRV features were analyzed for classification of the smokers from the non-smokers. The results indicated that when the RMSSD, SD1, and RR-mean features were used concurrently, a classification efficiency of >90% was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db6) wavelet family. No difference was observed between the smokers and non-smokers, suggesting that smoking does not affect the conduction pathway of the heart.
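The time-domain HRV features named above are standard quantities computed from the RR-interval series. A minimal sketch on synthetic RR intervals (real values come from the detected ECG R peaks; the series here is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical RR-interval series (seconds); real values come from the ECG R peaks.
rr = 0.8 + 0.05 * rng.normal(size=300)

rr_mean = rr.mean()
diffs = np.diff(rr)
rmssd = np.sqrt(np.mean(diffs**2))   # root mean square of successive differences
sd1 = rmssd / np.sqrt(2.0)           # Poincare-plot short-axis spread
heart_rate = 60.0 / rr_mean          # beats per minute
print(f"RR mean {rr_mean:.3f} s, HR {heart_rate:.1f} bpm, "
      f"RMSSD {rmssd*1e3:.1f} ms, SD1 {sd1*1e3:.1f} ms")
```

Note that SD1 is deterministically RMSSD/sqrt(2), which is why the study's classifier gains by combining them with the RR mean rather than relying on either alone.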
A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis
NASA Astrophysics Data System (ADS)
Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.
2014-09-01
This study uses a wavelet-based method to investigate the dynamics of long memory in the returns and volatility of equity markets. In a sample of five developed and five emerging markets, we find that the daily return series from January 1988 to June 2013 may be considered a mix of weak long memory and mean-reverting processes. For the volatility of returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (the 1997 Asian financial crisis, the 2001 US recession and the 2008 subprime crisis), the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. The robustness of the results is checked with the de-trended fluctuation analysis approach.
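The robustness check mentioned, detrended fluctuation analysis, can be sketched in a few lines. This is a simplified fixed-scale implementation, not the authors' code; for an uncorrelated series the scaling exponent should come out near 0.5, with larger values indicating long memory.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        sq = []
        for seg in segs:                   # detrend each window linearly
            coef = np.polyfit(t, seg, 1)
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.standard_normal(4096))
print(alpha)   # close to 0.5 for uncorrelated noise
```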
Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive
NASA Astrophysics Data System (ADS)
Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.
2009-12-01
Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understanding the various physical processes occurring in the environment of a magnetized planet. However, no magnetosphere in the solar system is a homogeneous medium with a constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions whose densities and temperatures span at least six orders of magnitude. For this reason, several types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate, by different means, the in situ electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder and a high-time-resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ≤ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low-energy ions outflowing from the ionosphere, or may measure only a portion of the particles' energy range because of photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc, but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings yield precise density estimates between 0.2 and 80 e-/cc every minute or two.
Spacecraft-to-probe difference potential measurements from a double-probe electric field experiment can be calibrated against the above-mentioned types of measurements to derive bulk electron densities with a time resolution below 1 s. Such an in-flight calibration procedure has been performed successfully on past magnetospheric missions such as GEOS, ISEE-1, Viking, Geotail, CRRES and FAST. We first present the outcome of this calibration procedure for the Cluster mission, for plasma conditions encountered in the plasmasphere, the magnetotail lobes and the polar caps. This study is based on the use of the Cluster Active Archive (CAA) for data collected in the plasmasphere. The CAA offers the unique possibility of easy access to the best-calibrated data collected by all experiments on the Cluster satellites over their several years in orbit. In particular, this has made it possible to account for the impact of solar activity in the calibration procedure. Recent science nuggets based on these calibrated data are then presented, showing in particular the outcome of three-dimensional (3D) electron density mapping of the magnetotail lobes over several years.
A method for estimating the height of a mesospheric density level using meteor radar
NASA Astrophysics Data System (ADS)
Younger, J. P.; Reid, I. M.; Vincent, R. A.; Murphy, D. J.
2015-07-01
A new technique for determining the height of a constant density surface at altitudes of 78-85 km is presented. The first results are derived from a decade of observations by a meteor radar located at Davis Station in Antarctica and are compared with observations from the Microwave Limb Sounder instrument aboard the Aura satellite. The density of the neutral atmosphere in the mesosphere/lower thermosphere region around 70-110 km is an essential parameter for interpreting airglow-derived atmospheric temperatures, planning atmospheric entry maneuvers of returning spacecraft, and understanding the response of climate to different stimuli. This region is not well characterized, however, due to inaccessibility combined with a lack of consistent strong atmospheric radar scattering mechanisms. Recent advances in the analysis of detection records from high-performance meteor radars provide new opportunities to obtain atmospheric density estimates at high time resolutions in the MLT region using the durations and heights of faint radar echoes from meteor trails. Previous studies have indicated that the expected increase in underdense meteor radar echo decay times with decreasing altitude is reversed in the lower part of the meteor ablation region due to the neutralization of meteor plasma. The height at which the gradient of meteor echo decay times reverses is found to occur at a fixed atmospheric density. Thus, the gradient reversal height of meteor radar diffusion coefficient profiles can be used to infer the height of a constant density level, enabling the observation of mesospheric density variations using meteor radar.
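The gradient-reversal idea reduces to locating the sign change in the height derivative of the decay-time profile. A toy sketch with a synthetic profile follows; the 83 km peak is illustrative, not a measured value.

```python
import numpy as np

# Hypothetical profile: median log decay time of underdense meteor echoes
# per altitude bin. Decay times grow as altitude falls (slower ambipolar
# diffusion), then reverse lower down as meteor plasma is neutralized.
heights = np.arange(75.0, 96.0)               # km, ascending
log_tau = -0.05 * (heights - 83.0) ** 2       # synthetic peak at 83 km

def gradient_reversal_height(h, tau):
    """Height where d(tau)/dh changes sign, i.e. the decay-time maximum."""
    g = np.gradient(tau, h)
    sign_change = np.where(np.diff(np.sign(g)) != 0)[0]
    return h[sign_change[0] + 1] if sign_change.size else None

h_rev = gradient_reversal_height(heights, log_tau)
print(h_rev)  # → 83.0
```

Since the reversal occurs at a fixed atmospheric density, tracking `h_rev` over time is what turns the radar diffusion profiles into a record of mesospheric density variations.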
Stewart, Robert N; White, Devin A; Urban, Marie L; Morton, April M; Webster, Clayton G; Stoyanov, Miroslav K; Bright, Eddie A; Bhaduri, Budhendra L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life, based largely on information available in open sources. Currently, activity-based density estimates rely on simple summary statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require minimal statistical knowledge from the contributor, require minimal input by a statistician or facilitator, allow for human difficulties in expressing qualitative knowledge in a quantitative setting, and provide means by which the contributor can appraise whether their understanding and associated uncertainty were well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution serving as the prior for the Beta distribution. Based on geometric properties of the Beta distribution's parameter feasibility space and the bivariate Gaussian distribution, an automated encoding method is developed that meets these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
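For contrast with the paper's bivariate-Gaussian encoding, here is a much simpler moment-matching illustration of turning two non-statistical answers into Beta parameters; the question wording and numbers are hypothetical, not the PDT algorithm.

```python
# Simple stand-in for expert elicitation: the contributor supplies a
# "typical proportion" and says how many observations their certainty
# feels worth; moment matching turns these into Beta(alpha, beta).
def beta_from_expert(typical, confidence_n):
    """typical: best-guess proportion in (0, 1);
    confidence_n: equivalent number of observations of confidence."""
    alpha = typical * confidence_n
    beta = (1.0 - typical) * confidence_n
    return alpha, beta

a, b = beta_from_expert(0.3, 20)   # "about 30%, worth ~20 observations"
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))
print(a, b, mean)                  # → 6.0 14.0 0.3
```

The contributor could then be shown the implied spread (`var` here) to appraise whether their uncertainty was well captured, which is the feedback loop the paper emphasizes.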
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
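The overfitting problem the authors address can be reproduced in miniature: an unregularized high-order polynomial fit develops huge coefficients, while a Tikhonov (ridge) penalty keeps the model well behaved. This is a generic sketch of the regularization principle, not the BEEF-vdW machinery.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 25)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)

# Degree-15 polynomial model fitted on only 25 noisy points.
X = np.vander(x, 16, increasing=True)

# Plain least squares vs. a Tikhonov (ridge) regularized fit.
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
lam = 1e-3
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The regularized coefficients stay modest while still fitting the data;
# the unregularized ones blow up (overfitting in a flexible model space).
print(np.abs(w_ols).max(), np.abs(w_ridge).max())
```

In the paper the same idea is applied to exchange-correlation expansion coefficients, with cross-validation choosing the regularization strength.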
Tangkaratt, Voot; Xie, Ning; Sugiyama, Masashi
2015-01-01
Regression aims at estimating the conditional mean of output given input. However, regression is not informative enough if the conditional density is multimodal, heteroskedastic, and asymmetric. In such a case, estimating the conditional density itself is preferable, but conditional density estimation (CDE) is challenging in high-dimensional space. A naive approach to coping with high dimensionality is to first perform dimensionality reduction (DR) and then execute CDE. However, a two-step process does not perform well in practice because the error incurred in the first DR step can be magnified in the second CDE step. In this letter, we propose a novel single-shot procedure that performs CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR as the problem of minimizing a squared-loss variant of conditional entropy, and this is solved using CDE. Thus, an additional CDE step is not needed after DR. We demonstrate the usefulness of the proposed method through extensive experiments on various data sets, including humanoid robot transition and computer art. PMID:25380340
Fakhar, Kaihan; Hastings, Erin; Butson, Christopher R.; Foote, Kelly D.; Zeilman, Pam; Okun, Michael S.
2013-01-01
Objective We aimed in this investigation to study deep brain stimulation (DBS) battery drain with special attention directed toward patient symptoms prior to and following battery replacement. Background Previously our group developed web-based calculators and smart phone applications to estimate DBS battery life (http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator). Methods A cohort of 320 patients undergoing DBS battery replacement from 2002–2012 were included in an IRB approved study. Statistical analysis was performed using SPSS 20.0 (IBM, Armonk, NY). Results The mean charge density for treatment of Parkinson’s disease was 7.2 µC/cm2/phase (SD?=?3.82), for dystonia was 17.5 µC/cm2/phase (SD?=?8.53), for essential tremor was 8.3 µC/cm2/phase (SD?=?4.85), and for OCD was 18.0 µC/cm2/phase (SD?=?4.35). There was a significant relationship between charge density and battery life (r?=??.59, p<.001), as well as total power and battery life (r?=??.64, p<.001). The UF estimator (r?=?.67, p<.001) and the Medtronic helpline (r?=?.74, p<.001) predictions of battery life were significantly positively associated with actual battery life. Battery status indicators on Soletra and Kinetra were poor predictors of battery life. In 38 cases, the symptoms improved following a battery change, suggesting that the neurostimulator was likely responsible for symptom worsening. For these cases, both the UF estimator and the Medtronic helpline were significantly correlated with battery life (r?=?.65 and r?=?.70, respectively, both p<.001). Conclusions Battery estimations, charge density, total power and clinical symptoms were important factors. The observation of clinical worsening that was rescued following neurostimulator replacement reinforces the notion that changes in clinical symptoms can be associated with battery drain. PMID:23536810
Kernel density estimation and K-means clustering to profile road accident hotspots.
Anderson, Tessa K
2009-05-01
Identifying road accident hotspots plays a key part in determining effective strategies for reducing areas with a high density of accidents. This paper presents (1) a methodology using Geographical Information Systems (GIS) and kernel density estimation to study the spatial patterns of injury-related road accidents in London, UK and (2) a clustering methodology using environmental data and the results from the first section to create a classification of road accident hotspots. The use of this methodology is illustrated using the London area in the UK. Road accident data collected by the Metropolitan Police from 1999 to 2003 were used. A kernel density estimation map was created and subsequently disaggregated by cell density to create a basic spatial unit of an accident hotspot. Environmental data were then appended to the hotspot cells and, using K-means clustering, groups of similar hotspots were identified. Five groups and 15 clusters were created based on collision and attribute data. These clusters are discussed and evaluated according to their robustness and potential uses in road safety campaigning. PMID:19393780
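The first stage, building a kernel density surface from accident points and cutting it by cell density, can be sketched as follows. This uses a fixed-bandwidth Gaussian kernel and synthetic coordinates; the paper's actual bandwidth and cut-off choices are not reproduced here.

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, bandwidth):
    """Fixed-bandwidth Gaussian kernel density surface on a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx)
    for px, py in points:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        density += np.exp(-d2 / (2 * bandwidth ** 2))
    return density / (2 * np.pi * bandwidth ** 2 * len(points))

# Synthetic "accident" coordinates clustered near (2, 2).
pts = [(2.0, 2.0), (2.1, 1.9), (1.9, 2.2), (2.05, 2.1), (7.0, 7.0)]
xs = ys = np.linspace(0, 10, 101)
dens = kde_grid(pts, xs, ys, bandwidth=0.5)

# Cells above a density cut become the basic spatial units of a hotspot,
# to which environmental attributes would then be appended for clustering.
hotspot_cells = dens > 0.5 * dens.max()
print(hotspot_cells.sum(), "grid cells flagged as hotspot")
```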
Sadeh, Iftach; Lahav, Ofer
2015-01-01
We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister and Lahav (2004). Large photometric galaxy surveys are important for cosmological studies, and in particular for characterizing the nature of dark energy. The success of such surveys greatly depends on the ability to measure photo-zs, based on limited spectral data. ANNz2 utilizes multiple machine learning methods, such as artificial neural networks, boosted decision/regression trees and k-nearest neighbours. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solution...
Validation tests of an improved kernel density estimation method for identifying disease clusters
NASA Astrophysics Data System (ADS)
Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra
2012-07-01
The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
Density-dependent analysis of nonequilibrium paths improves free energy estimates
Minh, David D. L.
2009-01-01
When a system is driven out of equilibrium by a time-dependent protocol that modifies the Hamiltonian, it follows a nonequilibrium path. Samples of these paths can be used in nonequilibrium work theorems to estimate equilibrium quantities such as free energy differences. Here, we consider analyzing paths generated with one protocol using another one. It is posited that analysis protocols which minimize the lag, the difference between the nonequilibrium and the instantaneous equilibrium densities, will reduce the dissipation of reprocessed trajectories and lead to better free energy estimates. Indeed, when minimal lag analysis protocols based on exactly soluble propagators or relative entropies are applied to several test cases, substantial gains in the accuracy and precision of estimated free energy differences are observed. PMID:19485432
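The underlying work theorem can be checked numerically: the Jarzynski estimator applied to Gaussian work samples should recover dF = mu - sigma^2/(2 kT). This is a generic sketch of the nonequilibrium work estimator, not the paper's minimal-lag analysis.

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Free energy from nonequilibrium work samples via the Jarzynski
    equality dF = -kT * ln<exp(-W/kT)>, with a log-sum-exp shift for
    numerical stability."""
    w = np.asarray(work, dtype=float) / kT
    shift = w.min()
    return kT * shift - kT * np.log(np.mean(np.exp(-(w - shift))))

# For Gaussian work (mean mu, std s), theory gives dF = mu - s**2 / (2*kT).
rng = np.random.default_rng(7)
mu, s = 5.0, 1.0
work = rng.normal(mu, s, 200_000)
df = jarzynski_free_energy(work)
print(df)  # ≈ 4.5
```

The exponential average is dominated by rare low-work trajectories, which is why reducing dissipation (the lag) improves the accuracy and precision of such estimates.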
Pedotransfer functions for Irish soils - estimation of bulk density (?b) per horizon type
NASA Astrophysics Data System (ADS)
Reidy, B.; Simo, I.; Sills, P.; Creamer, R. E.
2015-10-01
Soil bulk density is a key property in defining soil characteristics. It describes the packing structure of the soil and is also essential for measuring soil carbon stock and for nutrient assessment. In many older surveys this property was neglected, and in many modern surveys it is omitted because of the laboratory and labour costs involved, or because the core method cannot be applied. To overcome these oversights, pedotransfer functions that use other known soil properties are applied to estimate bulk density. Pedotransfer functions have been derived from large international datasets across many studies, each with its own inherent biases, and many ignore horizonation and depth variance. Initially, pedotransfer functions from the literature were used to predict bulk density for different horizon types using local known bulk density datasets. The best-performing functions were then selected, recalibrated, and validated again using the known data. The predicted coefficient of determination was 0.5 or greater in 12 of the 17 horizon types studied. These new equations allowed gap filling where bulk density data were missing for part or all of a soil profile. This in turn allowed the development of an indicative soil bulk density map for Ireland at 0-30 and 30-50 cm horizon depths. In general, the horizons with the largest known datasets gave the best predictions using the recalibrated and validated pedotransfer functions.
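A pedotransfer function in its simplest form is just a regression of bulk density on more easily measured properties, screened by the coefficient of determination as in the paper. The sketch below uses hypothetical data; the paper's actual predictors and equations are not reproduced.

```python
import numpy as np

# Illustrative pedotransfer fit (invented data): bulk density (g/cm^3)
# regressed on soil organic carbon (%) for one horizon type.
soc = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.5])
bd  = np.array([1.45, 1.38, 1.30, 1.22, 1.15, 1.05, 0.95])

slope, intercept = np.polyfit(soc, bd, 1)
pred = intercept + slope * soc

# Coefficient of determination, the screening criterion used in the paper.
ss_res = np.sum((bd - pred) ** 2)
ss_tot = np.sum((bd - bd.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))   # near 1 for this nearly linear toy data
```

Fitted per horizon type, such equations let missing bulk densities be gap-filled from properties that are routinely recorded.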
Chestnut, Tara; Anderson, Chauncey; Popa, Radu; Blaustein, Andrew R; Voytek, Mary; Olson, Deanna H; Kirshtein, Julie
2014-01-01
Biodiversity losses are occurring worldwide due to a combination of stressors. For example, by one estimate, 40% of amphibian species are vulnerable to extinction, and disease is one threat to amphibian populations. The emerging infectious disease chytridiomycosis, caused by the aquatic fungus Batrachochytrium dendrobatidis (Bd), is a contributor to amphibian declines worldwide. Bd research has focused on the dynamics of the pathogen in its amphibian hosts, with little emphasis on investigating the dynamics of free-living Bd. Therefore, we investigated patterns of Bd occupancy and density in amphibian habitats using occupancy models, powerful tools for estimating site occupancy and detection probability. Occupancy models have been used to investigate diseases where the focus was on pathogen occurrence in the host. We applied occupancy models to investigate free-living Bd in North American surface waters to determine Bd seasonality, relationships between Bd site occupancy and habitat attributes, and probability of detection from water samples as a function of the number of samples, sample volume, and water quality. We also report on the temporal patterns of Bd density from a 4-year case study of a Bd-positive wetland. We provide evidence that Bd occurs in the environment year-round. Bd exhibited temporal and spatial heterogeneity in density, but did not exhibit seasonality in occupancy. Bd was detected in all months, typically at less than 100 zoospores L⁻¹. The highest density observed was ~3 million zoospores L⁻¹. We detected Bd in 47% of sites sampled, but estimated that Bd occupied 61% of sites, highlighting the importance of accounting for imperfect detection. When Bd was present, there was a 95% chance of detecting it with four samples of 600 mL of water or five samples of 60 mL.
Our findings provide important baseline information to advance the study of Bd disease ecology, and advance our understanding of amphibian exposure to free-living Bd in aquatic habitats over time. PMID:25222122
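The reported sampling guidance follows from a simple independence model: with per-sample detection probability p, k samples detect the pathogen at least once with probability 1 - (1 - p)^k. Inverting the 95%-with-four-samples figure gives the implied per-sample probability (a back-of-envelope reading of the abstract, not the authors' occupancy-model output):

```python
# Overall detection over k independent samples: 1 - (1 - p)**k.
# Invert the reported "95% with four 600 mL samples" for p.
def per_sample_p(overall, k):
    return 1.0 - (1.0 - overall) ** (1.0 / k)

p4 = per_sample_p(0.95, 4)
print(round(p4, 3))  # → 0.527 per 600 mL sample
```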
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated for city-level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined under various combinations of technological parameters. Daejeon city in South Korea is selected for a case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, an optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles.
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
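The flavor of a density-optimization model can be conveyed with a toy cost trade-off; this is not the ERDEC formulation itself, and the cost terms and numbers are invented.

```python
import math

# Toy trade-off: installation cost grows linearly with station density d,
# drivers' access cost falls as 1/d; total cost c_i*d + c_a/d is
# minimized at d* = sqrt(c_a / c_i).
def optimal_density(c_install, c_access):
    return math.sqrt(c_access / c_install)

d_star = optimal_density(c_install=4.0, c_access=100.0)
print(d_star)   # → 5.0 stations per unit area
```

A real model like ERDEC replaces the two invented cost terms with regionally and technologically parameterized ones, which is why the optimal density varies across the city map.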
Adaptive bandwidth kernel density estimation for next-generation sequencing data
2013-01-01
Background High-throughput sequencing experiments can be viewed as measuring some sort of a "genomic signal" that may represent a biological event such as the binding of a transcription factor to the genome, locations of chromatin modifications, or even a background or control condition. Numerous algorithms have been developed to extract different kinds of information from such data. However, there has been very little focus on the reconstruction of the genomic signal itself. Such reconstructions may be useful for a variety of purposes ranging from simple visualization of the signals to sophisticated comparison of different datasets. Methods Here, we propose that adaptive-bandwidth kernel density estimators are well-suited for genomic signal reconstructions. This class of estimators is a natural extension of the fixed-bandwidth estimators that have been employed in several existing ChIP-Seq analysis programs. Results Using a set of ChIP-Seq datasets from the ENCODE project, we show that adaptive-bandwidth estimators have greater accuracy at signal reconstruction compared to fixed-bandwidth estimators, and that they have significant advantages in terms of visualization as well. For both fixed and adaptive-bandwidth schemes, we demonstrate that smoothing parameters can be set automatically using a held-out set of tuning data. We also carry out a computational complexity analysis of the different schemes and confirm through experimentation that the necessary computations can be readily carried out on a modern workstation without any significant issues. PMID:24564977
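An adaptive-bandwidth estimator in one dimension can be sketched by letting each read's bandwidth equal its distance to the k-th nearest neighbouring read, so dense regions get sharp kernels and sparse regions get smooth ones. This is a simplified stand-in for the paper's scheme, with synthetic "read" positions.

```python
import numpy as np

def adaptive_kde(reads, grid, k=5):
    """1-D Gaussian KDE whose bandwidth at each read is the distance to
    its k-th nearest neighbour."""
    reads = np.sort(np.asarray(reads, dtype=float))
    dist = np.abs(reads[:, None] - reads[None, :])
    h = np.sort(dist, axis=1)[:, k]               # per-read bandwidth
    h = np.maximum(h, 1e-3)                       # avoid zero bandwidth
    dens = np.zeros_like(grid, dtype=float)
    for r, hw in zip(reads, h):
        dens += np.exp(-0.5 * ((grid - r) / hw) ** 2) / (hw * np.sqrt(2 * np.pi))
    return dens / len(reads)

grid = np.linspace(0, 100, 201)
reads = np.concatenate([np.full(20, 30.0) + np.arange(20) * 0.5,  # dense peak
                        [5.0, 60.0, 90.0]])                        # scattered
dens = adaptive_kde(reads, grid)
print(grid[dens.argmax()])   # peak lands inside the dense cluster (30-40)
```

The fixed-bandwidth alternative simply replaces `h` with a constant, which oversmooths sharp peaks or undersmooths the background, the trade-off motivating the adaptive scheme.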
Boersen, M.R.; Clark, J.D.; King, T.L.
2003-01-01
The Recovery Plan for the federally threatened Louisiana black bear (Ursus americanus luteolus) mandates that remnant populations be estimated and monitored. In 1999 we obtained genetic material with barbed-wire hair traps to estimate bear population size and genetic diversity at the 329-km2 Tensas River Tract, Louisiana. We constructed and monitored 122 hair traps, which produced 1,939 hair samples. Of those, we randomly selected 116 subsamples for genetic analysis and used up to 12 microsatellite DNA markers to obtain multilocus genotypes for 58 individuals. We used Program CAPTURE to compute estimates of population size using multiple mark-recapture models. The area of study was almost entirely circumscribed by agricultural land, thus the population was geographically closed. Also, study-area boundaries were biologically discrete, enabling us to accurately estimate population density. Using model Chao Mh to account for possible effects of individual heterogeneity in capture probabilities, we estimated the population size to be 119 (SE = 29.4) bears, or 0.36 bears/km2. Because of low genetic variation, we had to examine a substantial number of loci to differentiate between some individuals. Despite the probable introduction of genes from Minnesota bears in the 1960s, the isolated population at Tensas exhibited characteristics consistent with inbreeding and genetic drift. Consequently, the effective population size at Tensas may be as few as 32, which warrants continued monitoring or possibly genetic augmentation.
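Model Mh is closely related to Chao's lower-bound estimator for abundance under individual heterogeneity, which needs only the capture-frequency counts. A sketch with hypothetical frequencies (not the Tensas data):

```python
# Chao's estimator: N_hat = S + f1**2 / (2*f2), where S is the number of
# distinct individuals genotyped, f1 the number seen exactly once and
# f2 the number seen exactly twice.
def chao_estimate(capture_counts):
    s = len(capture_counts)
    f1 = sum(1 for c in capture_counts if c == 1)
    f2 = sum(1 for c in capture_counts if c == 2)
    return s + (f1 ** 2) / (2 * f2) if f2 else float("inf")

# Hypothetical per-bear capture frequencies from hair-trap genotypes.
counts = [1] * 30 + [2] * 15 + [3] * 8 + [4] * 5
print(round(chao_estimate(counts)))  # → 88 bears estimated from 58 seen
```

Many singletons relative to doubletons signal heterogeneous capture probabilities, which is exactly when naive counts of genotyped individuals understate true abundance.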
Interference by pigment in the estimation of microalgal biomass concentration by optical density.
Griffiths, Melinda J; Garcin, Clive; van Hille, Robert P; Harrison, Susan T L
2011-05-01
Optical density is used as a convenient indirect measurement of biomass concentration in microbial cell suspensions. Absorbance of light by a suspension can be related directly to cell density using a suitable standard curve. However, inaccuracies can be introduced when the pigment content of the cells changes. Under the culture conditions used, the pigment content of the microalga Chlorella vulgaris varied between 0.5 and 5.5% of dry weight with age and culture conditions. This led to significant errors in biomass quantification over the course of a growth cycle, due to the change in absorbance. Using a standard curve generated at a single time point in the growth cycle to calculate dry weight (dw) from optical density led to average errors across the growth cycle, relative to actual dw, of between 9 and 18% at 680 nm and between 5 and 13% at 750 nm. When a standard curve generated under low-pigment conditions was used to estimate biomass under normal pigment conditions, average errors relative to actual dw across the growth cycle were 52% at 680 nm and 25% at 750 nm. Similar results were found with Scenedesmus, Spirulina and Nannochloropsis. Suggested strategies to minimise error include selecting a wavelength that minimises absorbance by the pigment, e.g. 750 nm where chlorophyll is the dominant pigment, and generating the standard curve towards the middle of, or across the entire, growth cycle. PMID:21329736
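The size of the error can be illustrated with a toy linear absorbance model in which pigment adds to the per-unit-biomass absorbance; the coefficients below are invented for illustration, not fitted to the paper's data.

```python
# Toy model: OD = k_cell*dw + k_pig*(pigment fraction)*dw.
# Calibrating on low-pigment cells and measuring high-pigment cells
# inflates the dry-weight estimate by the extra pigment absorbance.
k_cell, k_pig = 1.0, 8.0  # invented coefficients

def od(dw, pig_frac):
    return k_cell * dw + k_pig * pig_frac * dw

slope_low = od(1.0, 0.005)          # standard curve built at 0.5% pigment
true_dw = 0.8
measured = od(true_dw, 0.055)       # same culture later, at 5.5% pigment
est_dw = measured / slope_low
rel_err = (est_dw - true_dw) / true_dw
print(round(100 * rel_err, 1))      # → 38.5 (% overestimate)
```

Choosing a wavelength where `k_pig` is small (e.g. 750 nm for chlorophyll) shrinks the error term directly, which is the paper's first mitigation strategy.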
Lin, Yu-Pin; Chu, Hone-Jay; Wu, Chen-Fa; Chang, Tsun-Kuo; Chen, Chiu-Yang
2011-01-01
Concentrations of four heavy metals (Cr, Cu, Ni, and Zn) were measured at 1,082 sampling sites in Changhua county of central Taiwan. A hazard zone is defined in the study as a place where the content of each heavy metal exceeds the corresponding control standard. This study examines the use of spatial analysis for identifying multiple soil pollution hotspots in the study area. In a preliminary investigation, kernel density estimation (KDE) was used for hotspot analysis of soil pollution from a set of observed occurrences of hazards. In addition, the study estimates the hazardous probability of each heavy metal using geostatistical techniques such as sequential indicator simulation (SIS) and indicator kriging (IK). Results show that there are multiple hotspots for these four heavy metals and that they are strongly correlated with the locations of industrial plants and irrigation systems in the study area. Moreover, the pollution hotspots detected using KDE are almost the same as those estimated using IK or SIS. Soil pollution hotspots and polluted sampling densities are clearly delineated using the KDE approach based on contaminated point data. Furthermore, these techniques (KDE and the geostatistical approaches) characterize the risk of hazards and capture the hotspot areas without requiring exhaustive sampling. PMID:21318015
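A hedged sketch of the KDE hotspot step: estimate a Gaussian kernel density from observed hazard locations and flag query cells whose density exceeds a quantile threshold. The bandwidth and quantile here are illustrative choices, not the study's calibrated values:

```python
import numpy as np

def kde2d(points, query, bandwidth=1.0):
    """Gaussian kernel density estimate at query locations.

    points: (n, 2) observed hazard locations; query: (m, 2) cells to score.
    """
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth ** 2) / (2.0 * np.pi * bandwidth ** 2)
    return k.mean(axis=1)

def hotspots(points, query, bandwidth=1.0, quantile=0.5):
    """Flag query cells whose estimated density exceeds the given quantile."""
    dens = kde2d(points, query, bandwidth)
    return dens > np.quantile(dens, quantile)
```

A cluster of hazard observations produces high density near the cluster and near-zero density far away, which is what makes the hotspot map align with industrial and irrigation locations.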
Wavelet-Based Trend Detection and Estimation. Peter F. Craigmile and Donald B. Percival.
Percival, Don
NASA Astrophysics Data System (ADS)
Waters, Daniel F.; Cadou, Christopher P.
2014-02-01
A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally-buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) like compressor efficiency, inlet water temperature, scaling methodology, etc. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty, but overall the system still appears to offer five- to eight-fold improvements in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
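The sensitivity coefficients quoted above are normalized derivatives, (Δf/f)/(Δp/p). A small illustrative helper, not the authors' model code, that estimates such a coefficient by central differences:

```python
def sensitivity_coefficient(f, p0, rel_step=1e-3):
    """Normalized sensitivity of f at p0: (df/f) / (dp/p), by central difference."""
    dp = p0 * rel_step
    f0 = f(p0)
    dfdp = (f(p0 + dp) - f(p0 - dp)) / (2.0 * dp)
    return dfdp * (p0 / f0)
```

For a power-law relation f(p) = p^n the coefficient is exactly n, so a coefficient above 0.60 (turbine efficiency) versus below 0.15 (other parameters) directly compares the leverage each design parameter has on energy density.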
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, C. G.
2009-01-01
A recent study (Desai, 2008) has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers (Spirit and Opportunity) and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon entry, descent and landing. Desai's results have raised the question of whether there is a systematic problem within Mars atmospheric models. The proposal is to compare Mars atmospheric density estimates from Mars atmospheric models to measurements made by Mars Global Surveyor (MGS). The comparison study requires the completion of several tasks that would result in a greater understanding of the reasons behind the discrepancy found during recent landings on Mars and possible solutions to this problem.
Automated voxelization of 3D atom probe data through kernel density estimation.
Srinivasan, Srikant; Kaluskar, Kaustubh; Dumpala, Santoshrupa; Broderick, Scott; Rajan, Krishna
2015-12-01
Identifying nanoscale chemical features from atom probe tomography (APT) data routinely involves adjustment of voxel size as an input parameter, through visual supervision, making the final outcome user dependent, reliant on heuristic knowledge and potentially prone to error. This work utilizes kernel density estimators to select an optimal voxel size in an unsupervised manner to perform feature selection, in particular targeting resolution of interfacial features and chemistries. The capability of this approach is demonstrated through analysis of the γ/γ′ interface in a Ni-Al-Cr superalloy. PMID:25825028
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1,000, thereby greatly improving the applicability of the method.
NASA Astrophysics Data System (ADS)
Edwards, Matthew C.; Meyer, Renate; Christensen, Nelson
2015-09-01
The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a blocked Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with nonstationary data by breaking longer data streams into smaller and locally stationary components.
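The Whittle likelihood used to update the nonparametric PSD prior approximates the exact time-domain Gaussian likelihood by treating periodogram ordinates as independent. A minimal numpy sketch under that assumption (the rfft frequency layout and exclusion of the zero/Nyquist bins are implementation choices here, not details from the paper):

```python
import numpy as np

def whittle_loglik(x, psd):
    """Whittle log-likelihood of series x given a PSD sampled at the
    rfft frequencies (array of length n // 2 + 1)."""
    n = len(x)
    I = np.abs(np.fft.rfft(x)) ** 2 / n          # periodogram ordinates
    I, S = I[1:n // 2], np.asarray(psd, float)[1:n // 2]
    return float(-np.sum(np.log(S) + I / S))
```

In a Metropolis-within-Gibbs sampler this quantity scores each proposed Bernstein-polynomial PSD: a PSD matching the true noise level yields a higher value than a misspecified one.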
ANNz2 - Photometric redshift and probability density function estimation using machine-learning
NASA Astrophysics Data System (ADS)
Sadeh, Iftach
2014-05-01
Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz , is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
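For a scalar state the truncation step has a closed form: the constrained estimate is the mean of a Gaussian restricted to [a, b]. A small sketch of that one-dimensional case (the paper's filter applies the same idea to a multivariate state PDF):

```python
import math

def _phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_mean(mu, sigma, a, b):
    """Mean of N(mu, sigma^2) truncated to the interval [a, b]."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = _Phi(beta) - _Phi(alpha)
    return mu + sigma * (_phi(alpha) - _phi(beta)) / Z
```

If the unconstrained Kalman estimate is N(0, 1) but the state is known to be non-negative, the constrained estimate moves to sqrt(2/pi) ≈ 0.798 rather than being clipped to 0, which is why truncation improves accuracy over naive projection.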
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in the regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. 35 sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with ~5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
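StOMP itself proceeds in thresholded stages; as a simplified stand-in, basic orthogonal matching pursuit conveys the same greedy sparse-fitting idea: pick the basis column most correlated with the residual, refit by least squares, repeat. A hedged numpy sketch with synthetic data, not the paper's WRF-coupled system:

```python
import numpy as np

def omp(A, y, k):
    """Basic orthogonal matching pursuit: recover a k-sparse x with y ≈ A x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        # refit all selected coefficients jointly by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

The coefficients left out of the support are exactly set to zero, mirroring how the paper's procedure zeroes wavelet coefficients that the sensor network cannot constrain.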
Langlois, Timothy J; Fitzpatrick, Benjamin R; Fairclough, David V; Wakefield, Corey B; Hesp, S Alex; McLean, Dianne L; Harvey, Euan S; Meeuwig, Jessica J
2012-01-01
Age structure data is essential for single species stock assessments, but length-frequency data can provide complementary information. In south-western Australia, the majority of these data for exploited species are derived from line caught fish. However, baited remote underwater stereo-video systems (stereo-BRUVS) surveys have also been found to provide accurate length measurements. Given that line fishing tends to be biased towards larger fish, we predicted that stereo-BRUVS would yield length-frequency data with a smaller mean length and a distribution skewed towards smaller fish relative to that collected by fisheries-independent line fishing. To assess the biases and selectivity of stereo-BRUVS and line fishing, we compared the length-frequencies obtained for three commonly fished species, using a novel application of the Kernel Density Estimate (KDE) method and the established Kolmogorov-Smirnov (KS) test. The shape of the length-frequency distribution obtained for the labrid Choerodon rubescens by stereo-BRUVS and line fishing did not differ significantly but, as predicted, the mean length estimated from stereo-BRUVS was 17% smaller. Contrary to our predictions, the mean length and shape of the length-frequency distribution for the epinephelid Epinephelides armatus did not differ significantly between line fishing and stereo-BRUVS. For the sparid Pagrus auratus, the length-frequency distribution derived from the stereo-BRUVS method was bi-modal, while that from line fishing was uni-modal. However, the location of the first modal length class for P. auratus observed by each sampling method was similar. No differences were found between the results of the KS and KDE tests; however, KDE provided a data-driven method for approximating length-frequency data by a probability function and a useful way of describing and testing any differences between length-frequency samples. This study found the overall size selectivity of line fishing and stereo-BRUVS to be unexpectedly similar.
PMID:23209547
Gonzalez, Ruben; Huang, Biao; Lau, Eric
2015-09-01
Principal component analysis has been widely used in the process industries for the purpose of monitoring abnormal behaviour. Dimension reduction is achieved through PCA, while T-tests are used to test for abnormality. One of the main contributions to the success of PCA is its ability not only to detect problems, but also to give some indication as to where these problems are located. However, PCA and the T-test make use of Gaussian assumptions which may not be suitable in process fault detection. A previous modification of this method is the use of independent component analysis (ICA) for dimension reduction combined with kernel density estimation for detecting abnormality; like PCA, this method points out the location of the problems based on linear data-driven methods, but without the Gaussian assumptions. Both ICA and PCA, however, suffer from challenges in interpreting results, which can make it difficult to act quickly once a fault has been detected online. This paper proposes the use of Bayesian networks for dimension reduction, which allows the use of process knowledge, enabling more intelligent dimension reduction and easier interpretation of results. The dimension reduction technique is combined with multivariate kernel density estimation, making this technique effective for non-linear relationships with non-Gaussian variables. The performance of PCA, ICA and Bayesian networks is compared on data from an industrial scale plant. PMID:25930233
Gene Ontology density estimation and discourse analysis for automatic GeneRiF extraction
Gobeill, Julien; Tbahriti, Imad; Ehrler, Frédéric; Mottaz, Anaïs; Veuthey, Anne-Lise; Ruch, Patrick
2008-01-01
Background This paper describes and evaluates a sentence selection engine that extracts a GeneRiF (Gene Reference into Functions) as defined in ENTREZ-Gene based on a MEDLINE record. Inputs for this task include both a gene and a pointer to a MEDLINE reference. In the suggested approach we merge two independent sentence extraction strategies. The first proposed strategy (LASt) uses argumentative features, inspired by discourse-analysis models. The second extraction scheme (GOEx) uses an automatic text categorizer to estimate the density of Gene Ontology categories in every sentence; thus providing a full ranking of all possible candidate GeneRiFs. A combination of the two approaches is proposed, which also aims at reducing the size of the selected segment by filtering out non-content bearing rhetorical phrases. Results Based on the TREC-2003 Genomics collection for GeneRiF identification, the LASt extraction strategy is already competitive (52.78%). When used in a combined approach, the extraction task clearly shows improvement, achieving a Dice score of over 57% (+10%). Conclusions Argumentative representation levels and conceptual density estimation using Gene Ontology contents appear complementary for functional annotation in proteomics. PMID:18426554
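The Dice score used in the evaluation measures overlap between the extracted sentence and the reference GeneRiF. A minimal token-set sketch (the exact TREC Genomics scoring details may differ, e.g. in tokenization):

```python
def dice_score(candidate, reference):
    """Dice coefficient between the token sets of two text spans."""
    a = set(candidate.lower().split())
    b = set(reference.lower().split())
    if not a and not b:
        return 0.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

Two spans sharing 3 of their 4 tokens each score 2·3/(4+4) = 0.75; the paper's combined system reaches a Dice score over 57% against reference GeneRiFs.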
Accuracy of estimated geometric parameters of trees depending on the LIDAR data density
NASA Astrophysics Data System (ADS)
Hadas, Edyta; Estornell, Javier
2015-04-01
The estimation of dendrometric variables has become important for spatial planning and agriculture projects. Because classical field measurements are time consuming and inefficient, airborne LiDAR (Light Detection and Ranging) measurements are successfully used in this area. Point clouds acquired over relatively large areas make it possible to determine the structure of forestry and agriculture areas and the geometric parameters of individual trees. In this study two LiDAR datasets with different densities were used: a sparse one with an average density of 0.5 pt/m2 and a dense one with a density of 4 pt/m2. 25 olive trees were selected, and field measurements of tree height, crown bottom height, length of crown diameters and tree position were performed. To determine the tree geometric parameters from LiDAR data, two independent strategies were developed that utilize the ArcGIS, ENVI and FUSION software. Strategy a) was based on canopy surface model (CSM) slicing at 0.5 m height, and in strategy b) minimum bounding polygons were created as the tree crown area around each detected tree centroid. The individual steps were developed so that they can also be applied in automatic processing. To assess the performance of each strategy with both point clouds, the differences between the measured and estimated geometric parameters of trees were analyzed. As expected, tree height was underestimated for both strategies (RMSE=0.7m for the dense dataset and RMSE=1.5m for the sparse) and crown bottom height was overestimated (RMSE=0.4m and RMSE=0.7m for the dense and sparse datasets, respectively). For the dense dataset, strategy b) determined crown diameters more accurately (RMSE=0.5m) than strategy a) (RMSE=0.8m), and for the sparse dataset, only strategy a) proved adequate (RMSE=1.0m).
The accuracy of the strategies was also examined for its dependence on tree size. For the dense dataset, the larger the tree (height or longer crown diameter), the larger the error of the estimated tree height; for the sparse dataset, the larger the tree, the larger the error of the estimated crown bottom height. Finally, the spatial distribution of points inside the tree crown was analyzed by creating a normalized tree crown. This confirmed a high concentration of LiDAR points inside the central part of a tree.
Shimizu, Noritaka; Futamura, Yasunori; Sakurai, Tetsuya; Mizusaki, Takahiro; Otsuka, Takaharu
2015-01-01
We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of $J^\\pi=2^+$ and $2^-$ states in $^{58}$Ni in a unified manner.
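The eigenvalue-count idea can be illustrated in miniature: the number of eigenvalues in an interval equals the trace of the corresponding spectral projector, which a Hutchinson-type stochastic estimator approximates with random probe vectors. In this toy sketch the projector is applied exactly via dense diagonalization; the paper's method instead applies it implicitly through shifted Krylov-subspace solves, which is what makes huge Hamiltonian matrices tractable:

```python
import numpy as np

def stochastic_eig_count(A, a, b, n_probes=400, seed=0):
    """Estimate the number of eigenvalues of symmetric A in [a, b]."""
    rng = np.random.default_rng(seed)
    w, V = np.linalg.eigh(A)
    P = V[:, (w >= a) & (w <= b)]      # orthonormal basis of projector range
    n = A.shape[0]
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        y = P.T @ v
        est += y @ y                          # equals v^T (P P^T) v
    return est / n_probes
```

Averaging v^T Proj v over probes converges to trace(Proj), i.e., the level count, without ever enumerating individual eigenvalues.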
Krucker, Saem; Raftery, Claire L.; Hudson, Hugh S.
2011-06-10
We report on Transition Region And Coronal Explorer 171 Å observations of the GOES X20 class flare on 2001 April 2 that shows EUV flare ribbons with intense diffraction patterns. Between the 11th and 14th orders, the diffraction patterns of the compact flare ribbon are dispersed into two sources. The two sources are identified as emission from the Fe IX line at 171.1 Å and the combined emission from Fe X lines at 174.5, 175.3, and 177.2 Å. The prominent emission of the Fe IX line indicates that the EUV-emitting ribbon has a strong temperature component near the lower end of the 171 Å temperature response (≈0.6-1.5 MK). Fitting the observation with an isothermal model, the derived temperature is around 0.65 MK. However, the low sensitivity of the 171 Å filter to high-temperature plasma does not provide estimates of the emission measure for temperatures above ≈1.5 MK. Using the derived temperature of 0.65 MK, the observed 171 Å flux gives a density of the EUV ribbon of 3 × 10^11 cm^-3. This density is much lower than the density of the hard X-ray producing region (≈10^13 to 10^14 cm^-3), suggesting that the EUV sources, though closely related spatially, lie at higher altitudes.
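The density inference above follows from the emission measure relation EM = n²V. A trivial sketch; the emission measure and ribbon volume values below are illustrative assumptions, not figures from the abstract:

```python
import math

def ribbon_density(emission_measure, volume):
    """Electron density from EM = n^2 * V (cgs units: EM in cm^-3, V in cm^3)."""
    return math.sqrt(emission_measure / volume)
```

For example, an assumed EM of 9 × 10^49 cm^-3 over an assumed volume of 10^27 cm^3 gives n = 3 × 10^11 cm^-3, the order of magnitude reported for the EUV ribbon.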
Yang, Shanshan; Zheng, Fang; Luo, Xin; Cai, Suxian; Wu, Yunfeng; Liu, Kaizhi; Wu, Meihong; Chen, Jian; Krishnan, Sridhar
2014-01-01
Detection of dysphonia is useful for monitoring the progression of phonatory impairment for patients with Parkinson's disease (PD), and also helps assess the disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented by using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform the linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area under the receiver operating characteristic (ROC) curve of 0.94. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that dysphonia detection is insensitive to gender, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly. PMID:24586406
Wang, Shuihua; Chen, Mengmeng; Li, Yang; Zhang, Yudong; Han, Liangxiu; Wu, Jane; Du, Sidan
2015-01-01
Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer's disease, Parkinson's diseases, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines. PMID:26692046
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)
2001-01-01
The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by any single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
A Bayesian Hierarchical Model for Estimation of Abundance and Spatial Density of Aedes aegypti
Villela, Daniel A. M.; Codeço, Claudia T.; Figueiredo, Felipe; Garcia, Gabriela A.; Maciel-de-Freitas, Rafael; Struchiner, Claudio J.
2015-01-01
Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Ae. aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population by genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise 95% CI: 0.80–1.23) and more homogeneous during the high abundance period (5.2 mosquitoes/premise 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford's method, when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance.
The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework and its knowledge deemed crucial to predict the fate of transmission control strategies based on the replacement of vector populations. PMID:25906323
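As a point of comparison with the hierarchical model, classical MRR analysis (the Lincoln-Petersen family that Fisher-Ford builds on) reduces to a one-line estimator for a single recapture session. A sketch of the Chapman-corrected version, with made-up counts rather than the study's data:

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman-corrected Lincoln-Petersen abundance estimate.

    marked: individuals marked and released;
    caught: individuals in the second sample;
    recaptured: marked individuals found in the second sample.
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1
```

Such point estimates carry no spatial structure, which is precisely the limitation the hierarchical Bayesian model addresses by modeling abundance and spatial distribution jointly.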
Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data
Dorazio, Robert M.
2013-01-01
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar, and often identical, inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.
Barley, Mark H; Topping, David O; McFiggans, Gordon
2013-04-25
In order to model the properties and chemical composition of secondary organic aerosol (SOA), estimated physical property data for many thousands of organic compounds are required. Seven methods for estimating liquid density are assessed against experimental data for a test set of 56 multifunctional organic compounds. The group contribution method of Schroeder coupled with the Rackett equation using critical properties by Nannoolal was found to give the best liquid density values for this test set. During this work some problems with the representation of certain groups (aromatic amines and phenols) within the critical property estimation methods were identified, highlighting the importance (and difficulties) of deriving the parameters of group contribution methods from good quality experimental data. A selection of the estimation methods are applied to the 2742 compounds of an atmospheric chemistry mechanism, which showed that they provided consistent liquid density values for compounds with such atmospherically important (but poorly studied) functional groups as hydroperoxide, peroxide, peroxyacid, and PAN. Estimated liquid density values are also presented for a selection of compounds predicted to be important in atmospheric SOA. Hygroscopic growth factor (a property expected to depend on liquid density) has been calculated for a wide range of particle compositions. A low sensitivity of the growth factor to liquid density was found, and a single density value of 1350 kg·m⁻³ could be used for all multicomponent SOA in the calculation of growth factors for comparison with experimentally measured values in the laboratory or the field without incurring significant error. PMID:23506155
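The best-performing combination above pairs Schroeder-style group contributions (for critical properties) with the Rackett equation for saturated liquid molar volume. A hedged sketch of the Rackett step only; the water-like parameter values used in the example, including the Zra value, are illustrative assumptions, not the paper's inputs:

```python
R_GAS = 8.314  # J mol^-1 K^-1

def rackett_liquid_density(molar_mass, Tc, Pc, Zra, T):
    """Saturated liquid density (kg m^-3) from the Rackett equation.

    molar_mass in kg/mol, Tc in K, Pc in Pa, Zra dimensionless, T in K.
    """
    Tr = T / Tc
    v_molar = (R_GAS * Tc / Pc) * Zra ** (1.0 + (1.0 - Tr) ** (2.0 / 7.0))
    return molar_mass / v_molar
```

The critical properties Tc, Pc (and Zra where no fitted value exists) are exactly what the group contribution methods supply, which is why errors in those estimates for groups like aromatic amines and phenols propagate into the densities.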
Estimate of density-of-states changes with strain in A15 Nb3Sn superconductors
NASA Astrophysics Data System (ADS)
Qiao, Li; Yang, Lin; Song, Jie
2015-07-01
The experimental datasets are analyzed which show that the bare density of states N(EF) changes dramatically, as does the superconducting transition temperature Tc, in Nb3Sn samples strained in different states and at different levels. By taking into account the strain-induced change in the electron-phonon coupling strength, the density of states as a function of strain is estimated via a formula deduced from the strong-coupling modifications to the theory of type-II superconductivity. The results of the analysis indicate that (i) as the Nb3Sn material undergoes external axial strain ε, the value of N(EF) decreases by 15% as Tc varies from ~17.4 to ~16.6 K; (ii) the N(EF)-ε curve exhibits a changing asymmetry of shape, in qualitative agreement with recent first-principles calculations; (iii) the relationship between the density of states and the superconducting transition temperature in strained A15 Nb3Sn strands shows a significant difference between tensile and compressive loads, while the trend of the strain-induced drop in electron-phonon coupling strength versus Tc of distorted Nb3Sn under different stress conditions is consistent over a wide strain range. A general model for characterizing the effect of strain states on N(EF) in A15 Nb3Sn superconductors is suggested, and the density-of-states behavior in different modes of deformation can be well described with the modeling formalism. The present results are useful for understanding the origin of the strain sensitivity of the superconducting properties of A15 Nb3Sn, and for developing a comprehensive theory of the strain tensor-dependent superconducting behavior of A15 Nb3Sn strands.
Data Density and Trend Reversals in Auditory Graphs: Effects on Point-Estimation and Trend Identification Tasks
Auditory graphs--displays that represent quantitative information with sound--have the potential to make data (and therefore science) more accessible. This work examines the effects of data density (the number of data points presented per second) and the number of trend reversals for both point-estimation and trend identification tasks.
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and others external causes can interfere in magnetic resonance image acquisition and produce noise. In this kind of images, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method to solve this fact. This method performs shrinkage of wavelet coefficients based on the conditioned probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
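A minimal numerical sketch of the shrinkage idea: wavelet coefficients are modeled as a two-component mixture of Gaussian noise and Laplacian detail, EM estimates the parameters without a separate noise-variance estimator, and each coefficient is shrunk by its posterior probability of being detail. The data are synthetic and the update rules are simplified relative to the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "wavelet coefficients": a sparse Laplacian detail signal plus
# Gaussian noise (invented data, standing in for MRI subband coefficients)
n = 4000
signal = rng.laplace(scale=2.0, size=n) * (rng.random(n) < 0.3)
w = signal + rng.normal(scale=0.5, size=n)

# Two-component mixture: noise ~ N(0, sigma^2), detail ~ Laplace(0, b)
pi_det, sigma, b = 0.5, 1.0, 1.0                 # initial guesses
for _ in range(50):                              # EM iterations
    g = (1 - pi_det) * np.exp(-w**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    l = pi_det * np.exp(-np.abs(w) / b) / (2 * b)
    r = l / (g + l)                              # E-step: P(detail | w)
    pi_det = r.mean()                            # M-step: mixing weight
    b = (r * np.abs(w)).sum() / r.sum()          # weighted Laplace scale MLE
    sigma = np.sqrt(((1 - r) * w**2).sum() / (1 - r).sum())  # weighted Gaussian MLE

w_shrunk = r * w   # shrink each coefficient by its probability of being detail
```

Small coefficients (probably noise) are pushed toward zero while large ones (probably detail) pass nearly unchanged, which is the qualitative behavior of the conditioned-probability shrinkage described in the abstract.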
Cusack, Jeremy J; Swanson, Alexandra; Coulson, Tim; Packer, Craig; Carbone, Chris; Dickman, Amy J; Kosmala, Margaret; Lintott, Chris; Rowcliffe, J Marcus
2015-01-01
The random encounter model (REM) is a novel method for estimating animal density from camera trap data without the need for individual recognition. It has never been used to estimate the density of large carnivore species, despite these being the focus of most camera trap studies worldwide. In this context, we applied the REM to estimate the density of female lions (Panthera leo) from camera traps implemented in Serengeti National Park, Tanzania, comparing estimates to reference values derived from pride census data. More specifically, we attempted to account for bias resulting from non-random camera placement at lion resting sites under isolated trees by comparing estimates derived from night versus day photographs, between dry and wet seasons, and between habitats that differ in their amount of tree cover. Overall, we recorded 169 and 163 independent photographic events of female lions from 7,608 and 12,137 camera trap days carried out in the dry season of 2010 and the wet season of 2011, respectively. Although all REM models considered over-estimated female lion density, models that considered only night-time events resulted in estimates that were much less biased relative to those based on all photographic events. We conclude that restricting REM estimation to periods and habitats in which animal movement is more likely to be random with respect to cameras can help reduce bias in estimates of density for female Serengeti lions. We highlight that accurate REM estimates will nonetheless be dependent on reliable measures of average speed of animal movement and camera detection zone dimensions. © 2015 The Authors. Journal of Wildlife Management published by Wiley Periodicals, Inc. on behalf of The Wildlife Society. PMID:26640297
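The REM point estimate itself is a single closed-form expression. The sketch below uses the standard formula from Rowcliffe et al. (2008) with the abstract's dry-season event and effort counts; the day-range and detection-zone parameters are invented placeholders, not values estimated in this study:

```python
import math

def rem_density(y, t, v, r, theta):
    """Random encounter model (Rowcliffe et al. 2008):
    D = (y / t) * pi / (v * r * (2 + theta))."""
    return (y / t) * math.pi / (v * r * (2 + theta))

# Event and effort counts from the abstract's dry season (169 events over
# 7,608 camera-days); the remaining parameters are assumptions.
d = rem_density(y=169, t=7608,
                v=2.0,        # day range, km/day (assumed)
                r=0.01,       # detection radius, km (assumed)
                theta=0.35)   # detection angle, radians (assumed)
# d is in individuals per km^2
```

The formula makes the abstract's closing caveat concrete: the estimate scales inversely with v, r, and theta, so biased measures of movement speed or detection-zone dimensions propagate directly into the density estimate.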
Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?
Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2015-04-08
To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive diagnostics such as optical emission spectroscopy are excluded from the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system is the only measurement that indicates the plasma condition inside the ion source. However, beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is located at a distance (~30 cm) from the extraction region, so some uncertainty is expected if one tries to link beam current with plasma properties inside the RF driver. Characterizing the plasma in the source RF driver region is essential for maintaining optimum conditions for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent calculation of the plasma load.
Volcanic explosion clouds - Density, temperature, and particle content estimates from cloud motion
NASA Technical Reports Server (NTRS)
Wilson, L.; Self, S.
1980-01-01
Photographic records of 10 vulcanian eruption clouds produced during the 1978 eruption of Fuego Volcano in Guatemala have been analyzed to determine cloud velocity and acceleration at successive stages of expansion. Cloud motion is controlled by air drag (dominant during early, high-speed motion) and buoyancy (dominant during late motion when the cloud is convecting slowly). Cloud densities in the range 0.6 to 1.2 times that of the surrounding atmosphere were obtained by fitting equations of motion for two common cloud shapes (spheres and vertical cylinders) to the observed motions. Analysis of the heat budget of a cloud permits an estimate of cloud temperature and particle weight fraction to be made from the density. Model results suggest that clouds generally reached temperatures within 10 K of that of the surrounding air within 10 seconds of formation and that dense particle weight fractions were less than 2% by this time. The maximum sizes of dense particles supported by motion in the convecting clouds range from 140 to 1700 microns.
Evaluation of a brushing machine for estimating density of spider mites on grape leaves.
Macmillan, Craig D; Costello, Michael J
2015-12-01
Direct visual inspection and enumeration for estimating field population density of economically important arthropods, such as spider mites, provide more information than alternative methods, such as binomial sampling, but are laborious and time-consuming. A brushing machine can reduce sampling time and perhaps improve accuracy. Although brushing technology has been investigated and recommended as a useful tool for researchers and integrated pest management practitioners, little work to demonstrate the validity of this technique has been performed since the 1950s. We investigated the brushing machine manufactured by Leedom Enterprises (Mi-Wuk Village, CA, USA) for studies on spider mites. We evaluated (1) the mite recovery efficiency relative to the number of passes of a leaf through the brushes, (2) mite counts as generated by the machine compared to visual counts under a microscope, (3) the lateral distribution of mites on the collection plate and (4) the accuracy and precision of a 10 % sub-sample using a double-transect counting grid. We found that about 95 % of mites on a leaf were recovered after five passes, and 99 % after nine passes, and mite counts from brushing were consistently higher than those from visual inspection. Lateral distribution of mites was not uniform, being highest in concentration at the center and lowest at the periphery. The 10 % double-transect pattern did not result in a significant correlation with the total plate count at low mite density, but accuracy and precision improved at medium and high density. We suggest that a more accurate and precise sample may be achieved using a modified pattern which concentrates on the center plus some of the adjacent area. PMID:26459377
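The two recovery figures are mutually consistent under a simple assumption: each pass removes a constant fraction of the mites remaining on the leaf. A quick check of that assumption:

```python
# If each pass through the brushes removes a constant fraction p of the
# mites still on the leaf, cumulative recovery after n passes is
# 1 - (1 - p)**n.  The abstract's five-pass figure (95%) implies:
p = 1 - 0.05 ** (1 / 5)            # per-pass removal fraction, ~0.45
after_nine = 1 - (1 - p) ** 9      # predicted nine-pass recovery
```

The implied per-pass removal is about 45%, and nine passes then predict roughly 99.5% recovery, matching the reported 99% figure.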
Robust estimation of mammographic breast density: a patient-based approach
NASA Astrophysics Data System (ADS)
Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas
2012-02-01
Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologist's grading with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
Fleetwood, D.M.; Shaneyfelt, M.R.; Schwank, J.R.
1994-04-11
A simple method is described that combines conventional threshold-voltage and charge-pumping measurements on n- and p-channel metal-oxide-semiconductor (MOS) transistors to estimate radiation-induced oxide-, interface-, and border-trap charge densities. In some devices, densities of border traps (near-interfacial oxide traps that exchange charge with the underlying Si) approach or exceed the density of interface traps, emphasizing the need to distinguish border-trap contributions to MOS radiation response and long-term reliability from interface-trap contributions. Estimates of border-trap charge densities obtained via this new dual-transistor technique agree well with trap densities inferred from 1/f noise measurements for transistors with varying channel length.
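The conversion from a measured voltage shift to a sheet trap density follows the parallel-plate relation N = C_ox·ΔV/q. The sketch below illustrates only that conversion, with invented device numbers; the paper's actual dual-transistor separation of oxide-, interface-, and border-trap components is more involved:

```python
# Converting a voltage shift into a sheet trap density: N = C_ox * |dV| / q,
# with C_ox = eps_ox / t_ox.  All device numbers below are illustrative.
q = 1.602e-19                      # electron charge, C
eps_ox = 3.9 * 8.854e-12           # SiO2 permittivity, F/m
t_ox = 50e-9                       # oxide thickness, m (assumed)
C_ox = eps_ox / t_ox               # oxide capacitance per area, F/m^2

dV_mg = 0.50                       # midgap voltage shift, V (assumed)
N_ot = C_ox * dV_mg / q            # oxide-trapped charge density, /m^2

N_it_cp = 1.0e15                   # interface traps from charge pumping, /m^2 (assumed)
dV_total = 0.80                    # total threshold-shift-derived value, V (assumed)
N_border = C_ox * dV_total / q - N_it_cp   # border traps as a residual
```

The last line shows the general pattern the abstract describes: a charge-pumping measurement pins down the interface-trap component, and the remaining voltage-shift-derived charge is attributed to the other trap populations.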
Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.
Dunn, K. L.; Wilson, P. P. H.
2013-07-01
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
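The basic contrast between a nodal kernel estimate and a cell-averaged histogram tally can be shown in one dimension. This is a generic KDE-versus-histogram comparison with an Epanechnikov kernel, not the paper's integral-track estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 5000)            # "particle" sample positions

# Cell-averaged (histogram) estimate on a coarse 1D mesh
edges = np.linspace(-4.0, 4.0, 17)
hist, _ = np.histogram(x, bins=edges, density=True)

# Nodal kernel density estimate (Epanechnikov kernel) at the cell centres
nodes = 0.5 * (edges[:-1] + edges[1:])
h = 0.4                                   # bandwidth (assumed)
u = (nodes[:, None] - x[None, :]) / h
kde = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0).sum(axis=1) \
      / (len(x) * h)
```

With enough samples the two estimators agree on the same density, which mirrors the convergence behavior the abstract verifies for the 3D transport case; the data-transfer question in the paper is exactly how to compare nodal values like `kde` against cell averages like `hist`.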
Simultaneous Estimation of Depth, Density, and Water Equivalent of Snow using a Mobile GPR Setup
NASA Astrophysics Data System (ADS)
Jonas, T.; Griessinger, N.; Gindraux, S.
2014-12-01
Terrestrial and airborne laser scanning of snow has significantly increased our ability to characterize the spatial variability of snow depth. However, methods to provide corresponding datasets of snow water equivalent of similar quality are unavailable to date. Similar to laser scan technology, ground-penetrating radar (GPR) has become more accessible to snow researchers and is currently used successfully in snow hydrological studies. GPR systems can be used and set up in different ways to measure snow properties. In this study we elaborate on a mobile GPR system that allows simultaneous estimation of snow depth, density, and water equivalent in a snow survey setting. For this purpose we have built a GPR platform around a sledge system with four antenna pairs set up as a common-mid-point array and a separate fifth antenna pair dedicated to analyzing the frequency change of the radar signal when propagating through the snowpack. Liquid water content can be accounted for by assessing the frequency-dependent attenuation of the radar signal. We will present data from field campaigns that were carried out in 2013 and 2014 to test the ability of our GPR system to estimate snow bulk properties along several test transects. Along with the results, we will discuss system configuration and post-processing issues.
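The depth-density-SWE chain from common-mid-point observables can be sketched as follows, assuming the common dry-snow permittivity relation eps = (1 + 0.845*rho)^2 (Kovacs et al. 1995) and illustrative travel-time and velocity values rather than field data:

```python
# Snow depth, density, and SWE from GPR observables (illustrative values).
c = 0.3                       # speed of light, m/ns
t_twt = 18.0                  # two-way travel time, ns (assumed)
v = 0.22                      # radar velocity from CMP move-out, m/ns (assumed)

depth = v * t_twt / 2.0               # snow depth, m
eps = (c / v) ** 2                    # relative permittivity of the snowpack
rho = (eps ** 0.5 - 1.0) / 0.845      # dry-snow density, g/cm^3
swe = depth * rho * 1000.0            # water equivalent, mm
```

The CMP antenna array supplies the velocity, the travel time supplies the depth, and the empirical permittivity relation closes the system; wet snow breaks the dry-snow relation, which is why the fifth antenna pair's attenuation-based liquid-water estimate matters.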
New Estimates on the EKB Dust Density using the Student Dust Counter
NASA Astrophysics Data System (ADS)
Szalay, J.; Horanyi, M.; Poppe, A. R.
2013-12-01
The Student Dust Counter (SDC) is an impact dust detector on board the New Horizons Mission to Pluto. SDC was designed to resolve the mass of dust grains in the range of 10^-12 < m < 10^-9 g, covering an approximate size range of 0.5-10 µm in particle radius. The measurements can be directly compared to the prediction of a grain tracing trajectory model of dust originating from the Edgeworth-Kuiper Belt. SDC's results as well as data taken by the Pioneer 10 dust detector are compared to our model to derive estimates for the mass production rate and the ejecta mass distribution power law exponent. Contrary to previous studies, the assumption that all impacts are generated by grains on circular Keplerian orbits is removed, allowing for a more accurate determination of the EKB mass production rate. With these estimates, the speed and mass distribution of EKB grains entering atmospheres of outer solar system bodies can be calculated. Through December 2013, the New Horizons spacecraft reached approximately 28 AU, enabling SDC to map the dust density distribution of the solar system farther than any previous dust detector.
Rainfall-runoff modeling using conceptual, data driven, and wavelet based computing approach
NASA Astrophysics Data System (ADS)
Nayak, P. C.; Venkatesh, B.; Krishna, B.; Jain, Sharad K.
2013-06-01
The current study demonstrates the potential use of wavelet neural networks (WNN) for river flow modeling by developing a rainfall-runoff model for the Malaprabha basin in India. Daily data of rainfall, discharge, and evaporation for 21 years (from 1980 to 2000) have been used for modeling. In the model, the original inputs are decomposed by wavelets, and the decomposed sub-series are taken as inputs to an ANN. Model parameters are calibrated using 17 years of data, and the rest of the data are used for model validation. A statistical approach has been used to select the model inputs. Optimum architectures of the WNN models are selected according to the evaluation criteria of Nash-Sutcliffe efficiency coefficient and root mean squared error. Results of this study have been compared with a standard neural network model and the NAM model. The results indicate that the WNN model performs better than the ANN and NAM models in estimating hydrograph characteristics such as the flow duration curve.
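The decompose-then-regress structure of a WNN can be illustrated with a one-level Haar transform and a linear least-squares model standing in for the ANN; the rainfall and runoff series below are synthetic:

```python
import numpy as np

def haar_level1(x):
    """One-level Haar DWT: approximation and detail sub-series."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 5.0, 400)                          # synthetic rainfall
flow = 0.6 * rain + 0.3 * np.convolve(rain, np.ones(4) / 4, "same") \
    + rng.normal(0.0, 1.0, 400)                          # synthetic runoff

a, d = haar_level1(rain)                                 # decomposed inputs
y = flow[0::2][: len(a)]                                 # aligned target
X = np.column_stack([a, d, np.ones_like(a)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)             # linear stand-in for the ANN
pred = X @ coef
nse = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Here the approximation and detail sub-series jointly explain most of the synthetic runoff variance (Nash-Sutcliffe efficiency well above 0.5); the paper's models replace this linear read-out with a trained ANN and use deeper wavelet decompositions.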
On L^p-Resolvent Estimates and the Density of Eigenvalues for Compact Riemannian Manifolds
NASA Astrophysics Data System (ADS)
Bourgain, Jean; Shao, Peng; Sogge, Christopher D.; Yao, Xiaohua
2015-02-01
We address an interesting question raised by Dos Santos Ferreira, Kenig and Salo (Forum Math, 2014) about regions of the complex plane for which there can be uniform resolvent estimates for (Δ_g + ζ)^{-1}, ζ ∈ ℂ, where Δ_g is the Laplace-Beltrami operator with metric g on a given compact boundaryless Riemannian manifold of dimension n ≥ 3. This is related to earlier work of Kenig, Ruiz and the third author (Duke Math J 55:329-347, 1987) for the Euclidean Laplacian, in which case the region is the entire complex plane minus any disc centered at the origin. Presently, we show that for the round metric on the sphere, S^n, the resolvent estimates in (Dos Santos Ferreira et al. in Forum Math, 2014), involving a much smaller region, are essentially optimal. We do this by establishing sharp bounds based on the distance from ζ to the spectrum of -Δ_g. In the other direction, we also show that the bounds in (Dos Santos Ferreira et al. in Forum Math, 2014) can be sharpened logarithmically for manifolds with nonpositive curvature, and by powers in the case of the torus, T^n, with the flat metric. The latter improves earlier bounds of Shen (Int Math Res Not 1:1-31, 2001). The work of (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001) was based on Hadamard parametrices for (Δ_g + ζ)^{-1}. Ours is based on the related Hadamard parametrices for the wave equation, and it follows ideas in (Sogge in Ann Math 126:439-447, 1987) of proving L^p-multiplier estimates using small-time wave equation parametrices and the spectral projection estimates from (Sogge in J Funct Anal 77:123-138, 1988). This approach allows us to adapt arguments in Bérard (Math Z 155:249-276, 1977) and Hlawka (Monatsh Math 54:1-36, 1950) to obtain the aforementioned improvements over (Dos Santos Ferreira et al. in Forum Math, 2014) and (Shen in Int Math Res Not 1:1-31, 2001).
Further improvements for the torus are obtained using recent techniques of the first author (Bourgain in Israel J Math 193(1):441-458, 2013) and his work with Guth (Bourgain and Guth in Geom Funct Anal 21:1239-1295, 2011) based on the multilinear estimates of Bennett, Carbery and Tao (Math Z 2:261-302, 2006). Our approach also allows us to give a natural necessary condition for favorable resolvent estimates that is based on a measurement of the density of the spectrum of √(-Δ_g), and, moreover, a necessary and sufficient condition based on natural improved spectral projection estimates for shrinking intervals, as opposed to those in (Sogge in J Funct Anal 77:123-138, 1988) for unit-length intervals. We show that the resolvent estimates are sensitive to clustering within the spectrum, which is not surprising given Sommerfeld's original conjecture (Sommerfeld in Physikal Zeitschr 11:1057-1066, 1910) about these operators.
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converted into a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these sensors are scheme-independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
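As a toy illustration of why tree-based learners suit such prediction surfaces, the sketch below fits boosted regression stumps (a deliberately simplified stand-in for Random Forest or BART) to a synthetic density surface containing a sharp covariate threshold that a purely linear model would smooth over:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic covariates (e.g. night-time lights, land-cover fraction) and a
# population-density surface with a sharp threshold -- invented data.
X = rng.random((600, 2))
y = np.where(X[:, 0] > 0.5, 50.0, 5.0) + 20.0 * X[:, 1] ** 2 \
    + rng.normal(0.0, 2.0, 600)

def best_stump(X, y):
    """Single-split regression tree: best (feature, threshold, leaf means)."""
    best_sse, best = np.inf, None
    for j in range(X.shape[1]):
        for s in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= s
            if left.all() or not left.any():
                continue
            sse = ((y[left] - y[left].mean()) ** 2).sum() \
                + ((y[~left] - y[~left].mean()) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, s, y[left].mean(), y[~left].mean())
    return best

# Boosted stumps: a miniature tree ensemble
pred = np.full(len(y), y.mean())
for _ in range(50):
    j, s, m_left, m_right = best_stump(X, y - pred)
    pred += 0.5 * np.where(X[:, j] <= s, m_left, m_right)

r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The ensemble recovers the discontinuity at the threshold automatically, the kind of structure that gives tree-based models their edge over conventional regression in the study's comparison.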
Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie
2015-09-01
Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
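Deriving an HC5 from a kernel-density SSD reduces to inverting the estimated distribution function at 0.05. A sketch with a Gaussian kernel, Silverman's rule-of-thumb bandwidth, and invented toxicity values (not data for Zn, Cd, or Hg):

```python
import numpy as np

# Hypothetical acute toxicity values, log10(ug/L), for 12 species
logtox = np.array([1.2, 1.5, 1.9, 2.0, 2.3, 2.4, 2.7, 2.9, 3.1, 3.4, 3.8, 4.1])

# Gaussian-kernel SSD with Silverman's rule-of-thumb bandwidth
n = len(logtox)
h = 1.06 * logtox.std(ddof=1) * n ** (-1 / 5)
grid = np.linspace(logtox.min() - 3 * h, logtox.max() + 3 * h, 2000)
dens = (np.exp(-0.5 * ((grid[:, None] - logtox[None, :]) / h) ** 2)
        .sum(axis=1) / (n * h * np.sqrt(2 * np.pi)))

# HC5: the concentration protecting 95% of species (5th percentile of SSD)
cdf = np.cumsum(dens) * (grid[1] - grid[0])
hc5 = 10.0 ** grid[np.searchsorted(cdf, 0.05)]
```

Because no parametric family is imposed, the shape of the SSD (and hence the HC5) is driven by the data and the bandwidth choice, which is why the paper emphasizes bandwidth selection and testing.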
Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.
2002-01-01
The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.
X-Ray Methods to Estimate Breast Density Content in Breast Tissue
NASA Astrophysics Data System (ADS)
Maraghechi, Borna
This work focuses on analyzing x-ray methods to estimate the fat and fibroglandular contents in breast biopsies and in breasts. The knowledge of fat in the biopsies could aid in their wide-angle x-ray scatter analyses. A higher mammographic density (fibrous content) in breasts is an indicator of higher cancer risk. Simulations for 5 mm thick breast biopsies composed of fibrous, cancer, and fat and for 4.2 cm thick breast fat/fibrous phantoms were done. Data from experimental studies using plastic biopsies were analyzed. The 5 mm diameter, 5 mm thick plastic samples consisted of layers of polycarbonate (lexan), polymethyl methacrylate (PMMA-lucite) and polyethylene (polyet). In terms of the total linear attenuation coefficients, lexan ≈ fibrous, lucite ≈ cancer, and polyet ≈ fat. The detectors were of two types, photon counting (CdTe) and energy integrating (CCD). For biopsies, three photon counting methods were performed to estimate the fat (polyet) using simulation and experimental data. The two basis function method, which assumed the biopsies were composed of two materials, fat and a 50:50 mixture of fibrous (lexan) and cancer (lucite), appears to be the most promising method. Discrepancies were observed between the results obtained via simulation and experiment. Potential causes are the spectrum and the attenuation coefficient values used for simulations. An energy integrating method was compared to the two basis function method using experimental and simulation data. A slight advantage was observed for photon counting, whereas both detectors gave similar results for the 4.2 cm thick breast phantom simulations. The percentage of fibrous within a 9 cm diameter circular phantom of fibrous/fat tissue was estimated via a fan beam geometry simulation. Both methods yielded good results. Computed tomography (CT) images of the circular phantom were obtained using both detector types.
The Radon transforms were estimated via four energy integrating techniques and one photon counting technique. Contrast, signal-to-noise ratio (SNR) and pixel values between different regions of interest were analyzed. The two basis function method and two of the energy integrating methods (calibration, beam hardening correction) gave the highest and most linear curves for contrast and SNR.
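The two basis function method amounts to solving a small linear system: with log-attenuation measured at two energies and two basis materials, the volume fractions follow from a 2x2 solve. The attenuation coefficients below are illustrative stand-ins, not measured values:

```python
import numpy as np

# Two-basis-function decomposition: log-attenuation at two energies obeys
# ln(I0/I) = t * (f1*mu1 + f2*mu2), with volume fractions f1 + f2 = 1.
t = 0.5                                    # biopsy thickness, cm
mu = np.array([[0.80, 0.55],               # [mu_mix, mu_fat] at low energy, 1/cm
               [0.45, 0.35]])              # [mu_mix, mu_fat] at high energy, 1/cm

f_true = np.array([0.7, 0.3])              # 70% fibrous/cancer mix, 30% fat
proj = t * mu @ f_true                     # noiseless projections

f_est = np.linalg.solve(t * mu, proj)      # recover the two fractions
```

In the noiseless case the fractions are recovered exactly; with real spectra and detector noise the conditioning of the 2x2 matrix governs how errors in the measured projections amplify into the fat estimate, which connects to the discrepancies the abstract reports.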
Siegwarth, J.D.; LaBrecque, J.F.; Roncier, M.; Philippe, R.; Saint-Just, J.
1982-12-16
Liquefied natural gas (LNG) densities can be measured directly but are usually determined indirectly in custody transfer measurement by using a density correlation based on temperature and composition measurements. An LNG densimeter test facility at the National Bureau of Standards uses an absolute densimeter based on the Archimedes principle, while a test facility at Gaz de France uses a correlation method based on measurement of composition and density. A comparison between these two test facilities using a portable version of the absolute densimeter provides an experimental estimate of the uncertainty of the indirect method of density measurement for the first time, on a large (32 L) sample. The two test facilities agree for pure methane to within about 0.02%. For the LNG-like mixtures consisting of methane, ethane, propane, and nitrogen with the methane concentrations always higher than 86%, the calculated density is within 0.25% of the directly measured density 95% of the time.
Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts
NASA Technical Reports Server (NTRS)
Grau, David
2012-01-01
This process is designed to estimate the thickness change of a material through data analysis of a digitized version of an x-ray (or a digital x-ray) containing the material (with the thickness in question) and various tooling. Using this process, it is possible to estimate a material's thickness change in a region of the material or part that is thinner than the rest of the reference thickness. The same principle can also be used to determine thickening, using a thinner region as the reference, or to develop contour plots of an entire part. Proper tooling must be used. An x-ray film with an S-shaped characteristic curve, or a digital x-ray device producing like characteristics, is necessary. A film with linear characteristics would be ideal; however, at the time of this reporting, no such film is known. Machined components (with known fractional thicknesses) of a like material (similar density) to that of the material to be measured are necessary. The machined components should have machined through-holes. For ease of use and better accuracy, the through-holes should be a size larger than 0.125 in. (3.2 mm). Standard components for this use are known as penetrameters or image quality indicators. Also needed is standard x-ray equipment, if film is used in place of digital equipment, or x-ray digitization equipment with proven conversion properties. Typical x-ray digitization equipment is commonly used in the medical industry, and creates digital images of x-rays in DICOM format. It is recommended to scan the image in a 16-bit format; however, 12-bit and 8-bit resolutions are acceptable. Finally, x-ray analysis software that allows accurate digital image density calculations, such as ImageJ freeware, is needed.
The actual procedure requires the test article to be placed on the raw x-ray, ensuring the region of interest is aligned for perpendicular x-ray exposure capture. One or multiple machined components of like material/density with known thicknesses are placed atop the part (preferably in a region of nominal and non-varying thickness) such that exposure of the combined part and machined component lay-up is captured on the x-ray. Depending on the accuracy required, the machined component's thickness must be carefully chosen. Similarly, depending on the accuracy required, the lay-up must be exposed such that the regions of the x-ray to be analyzed have a density range between 1 and 4.5. After the exposure, the image is digitized, and the digital image can then be analyzed using the image analysis software.
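The calibration step above (known penetrameter thicknesses mapped to measured film densities, then inverted for the unknown region) can be sketched as a simple interpolation. All step thicknesses and film densities below are hypothetical.

```python
import numpy as np

# Calibration sketch: penetrameter steps of known thickness produce known film
# densities; the thickness of an unknown region is read back by interpolating
# its measured density on that curve. Step values below are hypothetical.

step_thickness = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # inches
step_density   = np.array([4.1,  3.4,  2.6,  1.9,  1.3])    # film density (1-4.5 band)

def thickness_from_density(d):
    # np.interp needs ascending x, and film density falls as thickness grows,
    # so interpolate over the reversed arrays.
    return np.interp(d, step_density[::-1], step_thickness[::-1])

print(float(thickness_from_density(2.6)))            # 0.2, on a calibration point
print(round(float(thickness_from_density(3.0)), 3))  # 0.175, between two steps
```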
NASA Astrophysics Data System (ADS)
Chen, W.; Shao, Z.; Tiong, L. K.
2015-11-01
Drought has caused the most widespread damage in China, accounting for over 50% of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data from 19 weather stations in Shandong province, China. A kernel-density-based method is adopted for the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is more accurate, without boundary leakage. Combined with GIS techniques, the drought risk is presented, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.
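A bivariate Gaussian kernel density estimate of the kind used above can be sketched in a few lines. The sample of (intensity, duration) pairs and the bandwidths are illustrative, not the Shandong station data; a diffusion estimator (DKDE) would differ mainly near domain boundaries.

```python
import numpy as np

# Product-Gaussian-kernel bivariate KDE over (drought intensity, duration).
rng = np.random.default_rng(0)
events = np.column_stack([rng.normal(1.5, 0.4, 200),   # SPI-based intensity (toy)
                          rng.normal(4.0, 1.5, 200)])  # duration in months (toy)

def gkde(points, data, h=(0.2, 0.8)):
    """f(x) = average over data of K((x - xi) / h) / prod(h)."""
    h = np.asarray(h)
    z = (points[:, None, :] - data[None, :, :]) / h    # (npoints, ndata, 2)
    k = np.exp(-0.5 * np.sum(z**2, axis=-1)) / (2 * np.pi * h.prod())
    return k.mean(axis=1)

grid = np.array([[1.5, 4.0], [3.5, 10.0]])
dens = gkde(grid, events)
print(dens[0] > dens[1])   # density is higher near the bulk of events -> True
```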
Markedly divergent estimates of Amazon forest carbon density from ground plots and satellites
Mitchard, Edward T A; Feldpausch, Ted R; Brienen, Roel J W; Lopez-Gonzalez, Gabriela; Monteagudo, Abel; Baker, Timothy R; Lewis, Simon L; Lloyd, Jon; Quesada, Carlos A; Gloor, Manuel; ter Steege, Hans; Meir, Patrick; Alvarez, Esteban; Araujo-Murakami, Alejandro; Aragão, Luiz E O C; Arroyo, Luzmila; Aymard, Gerardo; Banki, Olaf; Bonal, Damien; Brown, Sandra; Brown, Foster I; Cerón, Carlos E; Chama Moscoso, Victor; Chave, Jerome; Comiskey, James A; Cornejo, Fernando; Corrales Medina, Massiel; Da Costa, Lola; Costa, Flavia R C; Di Fiore, Anthony; Domingues, Tomas F; Erwin, Terry L; Frederickson, Todd; Higuchi, Niro; Honorio Coronado, Euridice N; Killeen, Tim J; Laurance, William F; Levis, Carolina; Magnusson, William E; Marimon, Beatriz S; Marimon Junior, Ben Hur; Mendoza Polo, Irina; Mishra, Piyush; Nascimento, Marcelo T; Neill, David; Núñez Vargas, Mario P; Palacios, Walter A; Parada, Alexander; Pardo Molina, Guido; Peña-Claros, Marielos; Pitman, Nigel; Peres, Carlos A; Poorter, Lourens; Prieto, Adriana; Ramirez-Angulo, Hirma; Restrepo Correa, Zorayda; Roopsind, Anand; Roucoux, Katherine H; Rudas, Agustin; Salomão, Rafael P; Schietti, Juliana; Silveira, Marcos; de Souza, Priscila F; Steininger, Marc K; Stropp, Juliana; Terborgh, John; Thomas, Raquel; Toledo, Marisol; Torres-Lezama, Armando; van Andel, Tinde R; van der Heijden, Geertje M F; Vieira, Ima C G; Vieira, Simone; Vilanova-Torre, Emilio; Vos, Vincent A; Wang, Ophelia; Zartman, Charles E; Malhi, Yadvinder; Phillips, Oliver L
2014-01-01
Aim The accurate mapping of forest carbon stocks is essential for understanding the global carbon cycle, for assessing emissions from deforestation, and for rational land-use planning. Remote sensing (RS) is currently the key tool for this purpose, but RS does not estimate vegetation biomass directly, and thus may miss significant spatial variations in forest structure. We test the stated accuracy of pantropical carbon maps using a large independent field dataset. Location Tropical forests of the Amazon basin. The permanent archive of the field plot data can be accessed at: http://dx.doi.org/10.5521/FORESTPLOTS.NET/2014_1 Methods Two recent pantropical RS maps of vegetation carbon are compared to a unique ground-plot dataset, involving tree measurements in 413 large inventory plots located in nine countries. The RS maps were compared directly to field plots, and kriging of the field data was used to allow area-based comparisons. Results The two RS carbon maps fail to capture the main gradient in Amazon forest carbon detected using 413 ground plots, from the densely wooded tall forests of the north-east, to the light-wooded, shorter forests of the south-west. The differences between plots and RS maps far exceed the uncertainties given in these studies, with whole regions over- or under-estimated by >25%, whereas regional uncertainties for the maps were reported to be density and allometry to create maps suitable for carbon accounting. The use of single relationships between tree canopy height and above-ground biomass inevitably yields large, spatially correlated errors. This presents a significant challenge to both the forest conservation and remote sensing communities, because neither wood density nor species assemblages can be reliably mapped from space. PMID:26430387
How massive is Saturn's B ring? Clues from cryptic density waves
NASA Astrophysics Data System (ADS)
Hedman, Matthew M.; Nicholson, Philip D.
2015-05-01
The B ring is the brightest and most opaque of Saturn's rings, but it is also among the least well understood because basic parameters like its surface mass density are still poorly constrained. Elsewhere in the rings, spiral density waves driven by resonances with Saturn's various moons provide precise and robust mass density estimates, but for most of the B ring, extremely high opacities and strong stochastic optical depth variations obscure the signal from these wave patterns. We have developed a new wavelet-based technique that combines data from multiple stellar occultations (observed by the Visual and Infrared Mapping Spectrometer (VIMS) instrument onboard the Cassini spacecraft), which has allowed us to identify signals that may be due to waves generated by three of the strongest resonances in the central and outer B ring. These wave signatures yield new estimates of the B ring's mass density and indicate that the B ring's total mass could be quite low, perhaps a fraction of the mass of Saturn's moon Mimas.
Trolle, M.; Kery, M.
2003-01-01
Neotropical felids such as the ocelot (Leopardus pardalis) are secretive, and it is difficult to estimate their populations using conventional methods such as radiotelemetry or sign surveys. We show that recognition of individual ocelots from camera-trapping photographs is possible, and we use camera-trapping results combined with closed population capture-recapture models to estimate density of ocelots in the Brazilian Pantanal. We estimated the area from which animals were camera trapped at 17.71 km2. A model with constant capture probability yielded an estimate of 10 independent ocelots in our study area, which translates to a density of 2.82 independent individuals for every 5 km2 (SE 1.00).
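The density arithmetic in the ocelot study above is easy to verify: the constant-capture-probability abundance estimate divided by the effective trapping area, rescaled to the reporting unit.

```python
# Back-of-envelope check of the ocelot figures above: an estimate of 10
# independent ocelots over an effective trapping area of 17.71 km2 scales
# to the reported 2.82 independent individuals per 5 km2.

n_hat, area_km2 = 10, 17.71
density_per_5km2 = n_hat / area_km2 * 5
print(round(density_per_5km2, 2))  # 2.82
```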
Karanth, K.U.; Chundawat, R.S.; Nichols, J.D.; Kumar, N.S.
2004-01-01
Tropical dry-deciduous forests comprise more than 45% of the tiger (Panthera tigris) habitat in India. However, in the absence of rigorously derived estimates of ecological densities of tigers in dry forests, critical baseline data for managing tiger populations are lacking. In this study tiger densities were estimated using photographic capture-recapture sampling in the dry forests of Panna Tiger Reserve in Central India. Over a 45-day survey period, 60 camera trap sites were sampled in a well-protected part of the 542-km2 reserve during 2002. A total sampling effort of 914 camera-trap-days yielded photo-captures of 11 individual tigers over 15 sampling occasions that effectively covered a 418-km2 area. The closed capture-recapture model Mh, which incorporates individual heterogeneity in capture probabilities, fitted these photographic capture history data well. The estimated capture probability/sample, 0.04, resulted in an estimated tiger population size and standard error of 29 (9.65), and a density of 6.94 (3.23) tigers/100 km2. The estimated tiger density matched predictions based on prey abundance. Our results suggest that, if managed appropriately, the available dry forest habitat in India has the potential to support a population size of about 9000 wild tigers.
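As with the ocelot study, the headline density follows directly from the Mh abundance estimate and the effectively sampled area.

```python
# Back-of-envelope check of the tiger figures above: 29 tigers over an
# effectively sampled 418 km2, expressed per 100 km2. (The published SE of
# 3.23 also propagates uncertainty in the effective area, so it exceeds a
# naive rescaling of the abundance SE alone.)

n_hat, area_km2 = 29, 418
density_per_100km2 = n_hat / area_km2 * 100
print(round(density_per_100km2, 2))  # 6.94
```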
Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation
Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.
2011-05-15
Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs for calculating the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection would have had a 3% chance of encountering a fully seal-offsetting fault.
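One simple way to turn an areal fault density into an encounter probability is a Poisson model over the plume footprint. This is a sketch under that assumption, not the study's actual method, and the density and footprint values are hypothetical, not the San Joaquin case values.

```python
import math

# Illustrative encounter-probability calculation: if faults of concern occur
# with areal density lam (faults per km2) and the plume footprint sweeps an
# area A, a Poisson model gives P(at least one encounter) = 1 - exp(-lam*A).

lam = 0.002        # seal-offsetting faults per km2 (assumed)
A = 15.0           # plume footprint in km2 (assumed)
p_encounter = 1 - math.exp(-lam * A)
print(round(p_encounter, 3))  # 0.03, i.e. a ~3% chance
```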
NASA Astrophysics Data System (ADS)
Liu, Z.; Lundgren, P.; Rosen, P. A.; Agram, P.
2013-12-01
Accurate imaging of deformation processes in plate boundary zones at various space-time scales is crucial to advancing our knowledge of plate boundary tectonics and volcano dynamics. Space-borne geodetic measurements such as interferometric synthetic aperture radar (InSAR) and continuous GPS (CGPS) provide complementary measurements of surface deformation. InSAR provides line-of-sight measurements that are spatially dense but temporally coarse, while point-based GPS measurements provide 3-D displacement components at sub-daily to daily intervals but are limited in resolving fine-scale deformation processes, depending on station distribution and spacing. The large volume of SAR data from existing satellite platforms and future SAR missions, and GPS time series from large-scale CGPS networks (e.g., EarthScope/PBO), call for efficient approaches to integrate these two data types for maximal extraction of the signal of interest and for imaging time-variable deformation processes. We present a wavelet-based spatiotemporal filtering approach to integrate InSAR and GPS data at multiple scales in space and time. The approach consists of a series of InSAR noise correction modules based on wavelet multi-resolution analysis (MRA) for correcting major noise components in InSAR images, and an InSAR time series analysis that combines MRA and small-baseline least-squares inversion with temporal filtering (wavelet or Kalman filter based) to filter out turbulent troposphere noise. It also exploits the temporal correlation between InSAR and GPS time series at multiple scales to reconstruct surface deformation measurements with dense spatial and temporal sampling. Compared to other approaches, this approach does not require a priori parameterization of temporal behavior and provides a general way to discover signals of interest at different spatiotemporal scales.
We present test cases where known signals with realistic noise components are synthesized for analysis and comparison. We are in the process of improving the approach and generalizing it to real-world applications.
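The core multi-resolution idea can be sketched with a hand-rolled Haar transform: decompose a noisy series, suppress the finest-scale (turbulence-like) detail coefficients, and reconstruct. This is a toy stand-in for the MRA machinery above; the signal, noise level, and number of levels are all invented.

```python
import numpy as np

# One-level Haar analysis/synthesis pair.
def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (fine) coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
signal = 5 * t                               # slow tectonic-like trend
noisy = signal + rng.normal(0, 0.5, t.size)  # turbulence-like noise

a, d1 = haar_step(noisy)                     # level 1
a, d2 = haar_step(a)                         # level 2
# Zero the fine-scale details, then reconstruct.
denoised = haar_inverse(haar_inverse(a, np.zeros_like(d2)), np.zeros_like(d1))

err_noisy = np.abs(noisy - signal).mean()
err_denoised = np.abs(denoised - signal).mean()
print(err_denoised < err_noisy)              # smoothing removes most noise -> True
```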
Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.
2005-01-01
The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information-theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure.
We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
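The information-theoretic model-selection step described above can be sketched as fitting candidate linear models on subsets of lidar metrics and ranking them by AIC. All data here are synthetic placeholders, not the study's field or lidar measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
h_mean = rng.uniform(5, 25, n)        # mean return height, m (toy lidar metric)
cover = rng.uniform(0.2, 0.9, n)      # canopy cover fraction (toy lidar metric)
junk = rng.normal(size=n)             # deliberately uninformative metric
biomass = 4.0 * h_mean + 30.0 * cover + rng.normal(0, 3, n)   # synthetic response

def aic(y, X):
    """AIC for an ordinary-least-squares fit: n*ln(RSS/n) + 2k."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * X.shape[1]

ones = np.ones(n)
models = {
    "h_mean": np.column_stack([ones, h_mean]),
    "h_mean+cover": np.column_stack([ones, h_mean, cover]),
    "h_mean+cover+junk": np.column_stack([ones, h_mean, cover, junk]),
}
scores = {name: aic(biomass, X) for name, X in models.items()}
print(scores["h_mean+cover"] < scores["h_mean"])  # cover earns its parameter cost -> True
```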
Lifshitz tails estimate for the density of states of the Anderson model
Jean-Michel Combes; François Germinet; Abel Klein
2010-08-27
We prove an upper bound for the (differentiated) density of states of the Anderson model at the bottom of the spectrum. The density of states is shown to exhibit the same Lifshitz tails upper bound as the integrated density of states.
Willett, Rebecca
Intensity and Density Estimation. Rebecca M. Willett, Member, IEEE, and Robert D. Nowak, Senior Member, IEEE. R. M. Willett is with the Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA (e-mail: willett@duke.edu). R. D. Nowak is with the Department of Electrical and Computer Engineering
Estimation of refractive index and density of lubricants under high pressure by Brillouin scattering
NASA Astrophysics Data System (ADS)
Nakamura, Y.; Fujishiro, I.; Kawakami, H.
1994-07-01
Employing a diamond-anvil cell, Brillouin scattering spectra at 90° and 180° scattering angles were measured for synthetic lubricants (paraffinic and naphthenic oils), and the sound velocity, density, and refractive index under high pressure were obtained. The density obtained from the thermodynamic relation was compared with that from the Lorentz-Lorenz formula. The density was also compared with Dowson's density-pressure equation for lubricants, and the density-pressure characteristics of the paraffinic and naphthenic oils were described considering the molecular structure of the solidified lubricants. The effect of such physical properties of lubricants on the elastohydrodynamic lubrication of ball bearings, gears and traction drives was considered.
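The density-from-refractive-index step can be sketched with the Lorentz-Lorenz relation mentioned above: if the specific refraction K = (n² - 1) / ((n² + 2)·ρ) is treated as pressure-independent, a refractive index measured under pressure yields a density estimate. The ambient and high-pressure values below are illustrative, not the paper's data.

```python
# Lorentz-Lorenz sketch: calibrate K at ambient conditions, then invert it
# for the density under pressure. All numeric values are assumed.

def ll_ratio(n):
    return (n**2 - 1) / (n**2 + 2)

n0, rho0 = 1.47, 0.85          # ambient refractive index and density, g/cm3 (assumed)
K = ll_ratio(n0) / rho0        # specific refraction, taken as constant

n_hp = 1.55                    # refractive index under high pressure (assumed)
rho_hp = ll_ratio(n_hp) / K
print(round(rho_hp, 3))        # densified relative to rho0, as compression requires
```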
Avetisov, K S; Markosian, A G
2013-01-01
Results of combined ultrasound scanning for estimation of acoustic lens density and biometric relations of the lens and other eye structures are presented. A group of 124 patients (189 eyes) was studied; they were subdivided according to age and the length of the anteroposterior axis of the eye. An examination algorithm was developed that allows selective estimation of the acoustic density of different lens zones and biometric measurements, including volumetric ones. An age-related increase in the acoustic density of different lens zones was revealed, which indirectly confirms the efficiency of the method. Biometric studies showed almost identical volumetric lens measurements in "normal" and "short" eyes, in spite of a significantly thicker central zone in the latter. The correlation between anterior chamber volume and the width of its angle was significantly lower in "short" eyes than in "normal" and "long" eyes (correlation coefficients 0.37, 0.68 and 0.63, respectively). PMID:23879017
NASA Astrophysics Data System (ADS)
Joglekar, D. M.; Mitra, M.
2015-12-01
The present investigation outlines a method based on the wavelet transform to analyze the vibration response of discrete piecewise linear oscillators, representative of beams with breathing cracks. The displacement and force variables in the governing differential equation are approximated using Daubechies compactly supported wavelets. An iterative scheme is developed to arrive at the optimum transform coefficients, which are back-transformed to obtain the time-domain response. A time-integration scheme, solving a linear complementarity problem at every time step, is devised to validate the proposed wavelet-based method. Applicability of the proposed solution technique is demonstrated by considering several test cases involving a cracked cantilever beam modeled as a bilinear SDOF system subjected to a harmonic excitation. In particular, the presence of higher-order harmonics, originating from the piecewise linear behavior, is confirmed in all the test cases. A parametric study involving variations in the crack depth and crack location is performed to bring out their effect on the relative strengths of higher-order harmonics. Versatility of the method is demonstrated by considering cases such as mixed-frequency excitation and an MDOF oscillator with multiple bilinear springs. In addition to proposing the wavelet-based method as a viable alternative for analyzing the response of piecewise linear oscillators, the proposed method can be easily extended to solve inverse problems, unlike other direct time integration schemes.
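The higher-harmonic claim above is easy to check numerically: a harmonically forced SDOF oscillator with a bilinear spring (stiffer when the "crack" closes) develops response energy at integer multiples of the forcing frequency. This is a plain time-integration sketch, not the paper's wavelet scheme, and all parameter values are illustrative.

```python
import numpy as np

period = 32.0                        # forcing period, s
steps_per_period = 3200
dt = period / steps_per_period       # the period spans an exact number of steps
omega = 2 * np.pi / period
k_closed, k_open, c, F = 1.0, 0.5, 0.1, 0.1

def accel(x, v, t):
    k = k_closed if x < 0 else k_open   # breathing crack: stiffness switches at x = 0
    return F * np.cos(omega * t) - c * v - k * x

x = v = 0.0
steps = 40 * steps_per_period        # 40 forcing periods; keep the last 16
xs = np.empty(steps)
for i in range(steps):
    # semi-implicit Euler, adequate for a qualitative spectrum
    v += accel(x, v, i * dt) * dt
    x += v * dt
    xs[i] = x

window = xs[-16 * steps_per_period:]             # exactly 16 periods of steady state
spec = np.abs(np.fft.rfft(window - window.mean()))
fund, second = spec[16], spec[32]                # bins at 1x and 2x forcing frequency
print(second / fund > 0.02)                      # even harmonic clearly present -> True
```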
Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.
2003-09-01
We present a method of direct estimation of important properties of a shared bipartite quantum state, within the ''distant laboratories'' paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement.
NASA Technical Reports Server (NTRS)
Jergas, M.; Breitenseher, M.; Gluer, C. C.; Yu, W.; Genant, H. K.
1995-01-01
To determine whether estimates of volumetric bone density from projectional scans of the lumbar spine have weaker associations with height and weight and stronger associations with prevalent vertebral fractures than standard projectional bone mineral density (BMD) and bone mineral content (BMC), we obtained posteroanterior (PA) dual X-ray absorptiometry (DXA), lateral supine DXA (Hologic QDR 2000), and quantitative computed tomography (QCT, GE 9800 scanner) in 260 postmenopausal women enrolled in two trials of treatment for osteoporosis. In 223 women, all vertebral levels, i.e., L2-L4 in the DXA scan and L1-L3 in the QCT scan, could be evaluated. Fifty-five women were diagnosed as having at least one mild fracture (age 67.9 +/- 6.5 years) and 168 women did not have any fractures (age 62.3 +/- 6.9 years). We derived three estimates of "volumetric bone density" from PA DXA (BMAD, BMAD*, and BMD*) and three from paired PA and lateral DXA (WA BMD, WA BMDHol, and eVBMD). While PA BMC and PA BMD were significantly correlated with height (r = 0.49 and r = 0.28) or weight (r = 0.38 and r = 0.37), QCT and the volumetric bone density estimates from paired PA and lateral scans were not (r = -0.083 to r = 0.050). BMAD, BMAD*, and BMD* correlated with weight but not height. The associations with vertebral fracture were stronger for QCT (odds ratio [OR] = 3.17; 95% confidence interval [CI] = 1.90-5.27), eVBMD (OR = 2.87; CI 1.80-4.57), WA BMDHol (OR = 2.86; CI 1.80-4.55) and WA BMD (OR = 2.77; CI 1.75-4.39) than for BMAD*/BMD* (OR = 2.03; CI 1.32-3.12), BMAD (OR = 1.68; CI 1.14-2.48), lateral BMD (OR = 1.88; CI 1.28-2.77), standard PA BMD (OR = 1.47; CI 1.02-2.13) or PA BMC (OR = 1.22; CI 0.86-1.74). The areas under the receiver operating characteristic (ROC) curves for QCT and all estimates of volumetric BMD were significantly higher compared with standard PA BMD and PA BMC.
We conclude that, like QCT, estimates of volumetric bone density from paired PA and lateral scans are unaffected by height and weight and are more strongly associated with vertebral fracture than standard PA BMD or BMC, or estimates of volumetric density that are solely based on PA DXA scans.
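One of the projectional "volumetric" estimates compared above, bone mineral apparent density (BMAD), is commonly computed by scaling BMC by the projected area raised to the 3/2 power, approximating a volume from a single PA scan. The sketch below uses that convention with illustrative numbers, not the study's patient data.

```python
# Areal BMD vs. a volumetric-style BMAD index from the same PA scan values.

def bmad(bmc_g, area_cm2):
    """Bone mineral apparent density: BMC / area^1.5 (g/cm3-like index)."""
    return bmc_g / area_cm2 ** 1.5

bmc_g, area_cm2 = 45.0, 50.0        # illustrative L2-L4 totals
bmd = bmc_g / area_cm2              # standard areal BMD, g/cm2
print(round(bmd, 3), round(bmad(bmc_g, area_cm2), 4))
```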
A field comparison of nested grid and trapping web density estimators
Jett, D.A.; Nichols, J.D.
1987-01-01
The usefulness of capture-recapture estimators in any field study will depend largely on underlying model assumptions and on how closely these assumptions approximate the actual field situation. Evaluation of estimator performance under real-world field conditions is often a difficult matter, although several approaches are possible. Perhaps the best approach involves use of the estimation method on a population with known parameters.
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Estimation of the density of Martian soil from radiophysical measurements in the 3-centimeter range
NASA Technical Reports Server (NTRS)
Krupenio, N. N.
1977-01-01
The density of the Martian soil is evaluated to a depth of up to one meter using the results of radar measurements at λ0 = 3.8 cm and polarized radio astronomical measurements at λ0 = 3.4 cm conducted onboard the automatic interplanetary stations Mars 3 and Mars 5. The average value of the soil density according to all measurements is ρ = 1.37 ± 0.33 g/cm³. A map of the distribution of the permittivity and soil density is derived, drawn up according to radiophysical data in the 3-centimeter range.
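The inference chain (radar echo strength to permittivity to density) can be sketched as below. The final step uses an assumed empirical regolith relation, sqrt(eps) = 1 + 0.5·ρ, of the form used in planetary radar studies, standing in for whatever calibration the original work used; the echo power ratio is hypothetical.

```python
import math

def permittivity_from_reflection(R):
    """Invert the normal-incidence Fresnel power reflectivity
    R = ((sqrt(eps) - 1) / (sqrt(eps) + 1))^2 for the permittivity."""
    r = math.sqrt(R)
    return ((1 + r) / (1 - r)) ** 2

def density_from_permittivity(eps):
    # Assumed empirical relation sqrt(eps) = 1 + 0.5*rho -> rho in g/cm3.
    return (math.sqrt(eps) - 1) / 0.5

eps = permittivity_from_reflection(0.065)   # hypothetical echo power ratio
rho = density_from_permittivity(eps)
print(round(eps, 2), round(rho, 2))         # 2.84 1.37, consistent with the value above
```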
Estimating cetacean density from passive acoustic arrays. Tiago A. Marques and Len Thomas
Marques, Tiago A.
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
2009-03-05
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems with faults, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square-root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
Lapuerta, Magín; Rodríguez-Fernández, José; Armas, Octavio
2010-09-01
Biodiesel fuels (methyl or ethyl esters derived from vegetable oils and animal fats) are currently being used as a means to diminish crude oil dependency and to limit the greenhouse gas emissions of the transportation sector. However, their physical properties differ from those of traditional fossil fuels, making their effect on new, electronically controlled vehicles uncertain. Density is one of those properties, and its implications go even further. First, governments are expected to boost the use of high-biodiesel-content blends, but biodiesel fuels are denser than fossil ones; in consequence, their blending proportion is indirectly restricted in order not to exceed the maximum density limit established in fuel quality standards. Second, an accurate knowledge of biodiesel density permits the estimation of other properties, such as the Cetane Number, whose direct measurement is complex and presents low repeatability and low reproducibility. In this study we compile densities of methyl and ethyl esters published in the literature, and propose equations to convert them to 15 degrees C and to predict biodiesel density based on chain length and degree of unsaturation. Both expressions were validated for a wide range of commercial biodiesel fuels. Using the latter, we define a term called the Biodiesel Cetane Index, which predicts the Biodiesel Cetane Number with high accuracy. Finally, simple calculations prove that the introduction of high-biodiesel-content blends in the fuel market would force refineries to reduce the density of their fossil fuels. PMID:20599853
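The two practical uses of density discussed above (correcting a measured density to the 15 °C reference, and checking how much biodiesel a blend can carry before hitting a density cap) can be sketched as follows. The thermal correction coefficient, the density cap, and all fuel densities are illustrative assumptions, not the paper's fitted equations.

```python
K_THERMAL = 0.0007    # g/cm3 per degC, typical order of magnitude for esters (assumed)

def density_at_15(rho_t, t_celsius):
    """Correct a density measured at t_celsius back to the 15 C reference."""
    return rho_t + K_THERMAL * (t_celsius - 15.0)

def max_biodiesel_fraction(rho_diesel, rho_biodiesel, rho_limit):
    """Largest volume fraction x with x*rho_bio + (1-x)*rho_diesel <= rho_limit,
    assuming ideal volumetric mixing."""
    return (rho_limit - rho_diesel) / (rho_biodiesel - rho_diesel)

print(round(density_at_15(0.876, 40.0), 4))                   # 0.8935
print(round(max_biodiesel_fraction(0.835, 0.883, 0.845), 3))  # 0.208
```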
Comparison of Precision Orbit Derived Density Estimates for CHAMP and GRACE Satellites
Fattig, Eric
2011-04-21
This comparison takes two forms: cross-correlation analysis and root-mean-square analysis. The densities obtained from the POE method are nearly always superior to those from the empirical models, both in matching the trends observed by the accelerometer (cross correlation...
Rivera-Milan, F. F.; Collazo, J.A.; Stahala, C.; Moore, W.J.; Davis, A.; Herring, G.; Steinkamp, M.; Pagliaro, R.; Thompson, J.L.; Bracey, W.
2005-01-01
Once abundant and widely distributed, the Bahama parrot (Amazona leucocephala bahamensis) currently inhabits only the Great Abaco and Great Inagua Islands of the Bahamas. In January 2003 and May 2002-2004, we conducted point-transect surveys (a type of distance sampling) to estimate density and population size and make recommendations for monitoring trends. Density ranged from 0.061 (SE = 0.013) to 0.085 (SE = 0.018) parrots/ha and population size ranged from 1,600 (SE = 354) to 2,386 (SE = 508) parrots when extrapolated to the 26,154 ha and 28,162 ha covered by surveys on Abaco in May 2002 and 2003, respectively. Density was 0.183 (SE = 0.049) and 0.153 (SE = 0.042) parrots/ha and population size was 5,344 (SE = 1,431) and 4,450 (SE = 1,435) parrots when extrapolated to the 29,174 ha covered by surveys on Inagua in May 2003 and 2004, respectively. Because parrot distribution was clumped, we would need to survey 213-882 points on Abaco and 258-1,659 points on Inagua to obtain a CV of 10-20% for estimated density. Cluster size and its variability and clumping increased in wintertime, making surveys imprecise and cost-ineffective. Surveys were reasonably precise and cost-effective in springtime, and we recommend conducting them when parrots are pairing and selecting nesting sites. Survey data should be collected yearly as part of an integrated monitoring strategy to estimate density and other key demographic parameters and improve our understanding of the ecological dynamics of these geographically isolated parrot populations at risk of extinction.
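A point-transect density estimate of the kind used above can be sketched with a fitted half-normal detection function g(r) = exp(-r²/(2σ²)): the effective detection area per point is ν = 2πσ²(1 - exp(-w²/(2σ²))), and density is n/(kν). The σ, truncation distance w, and counts below are hypothetical, not the Bahama parrot fit.

```python
import math

def effective_area(sigma, w):
    """Effective detection area per point, nu = 2*pi*Int_0^w g(r)*r dr,
    for a half-normal detection function with scale sigma (metres)."""
    return 2 * math.pi * sigma**2 * (1 - math.exp(-w**2 / (2 * sigma**2)))

n_detections, k_points = 120, 200   # detections and surveyed points (assumed)
sigma, w = 60.0, 150.0              # detection scale and truncation distance, m (assumed)

nu_m2 = effective_area(sigma, w)
density_per_ha = n_detections / (k_points * nu_m2) * 1e4   # m2 -> ha
print(round(density_per_ha, 3))     # 0.277 parrots/ha under these assumptions
```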
Rayan, D Mark; Mohamad, Shariff Wan; Dorward, Leejiah; Aziz, Sheema Abdul; Clements, Gopalasamy Reuben; Christopher, Wong Chai Thiam; Traeholt, Carl; Magintan, David
2012-12-01
The endangered Asian tapir (Tapirus indicus) is threatened by large-scale habitat loss, forest fragmentation and increased hunting pressure. Conservation planning for this species, however, is hampered by a severe paucity of information on its ecology and population status. We present the first Asian tapir population density estimate from a camera trapping study targeting tigers in a selectively logged forest within Peninsular Malaysia using a spatially explicit capture-recapture maximum likelihood based framework. With a trap effort of 2496 nights, 17 individuals were identified, corresponding to a density (standard error) estimate of 9.49 (2.55) adult tapirs/100 km². Although our results include several caveats, we believe that our density estimate still serves as an important baseline to facilitate the monitoring of tapir population trends in Peninsular Malaysia. Our study also highlights the potential of extracting vital ecological and population information for other cryptic individually identifiable animals from tiger-centric studies, especially with the use of a spatially explicit capture-recapture maximum likelihood based framework. PMID:23253368
Thomas, Len
DECAF: Density Estimation for Cetaceans from passive Acoustic Fixed sensors. Len Thomas, CREEM. There are many potential methods for estimating the density of cetacean species from fixed passive acoustic devices, based on detecting the sounds cetaceans make underwater using fixed hydrophones. Methods should
Variability of footprint ridge density and its use in estimation of sex in forensic examinations.
Krishan, Kewal; Kanchan, Tanuj; Pathania, Annu; Sharma, Ruchika; DiMaggio, John A
2015-10-01
The present study deals with a comparatively new biometric parameter of footprints called footprint ridge density. The study attempts to evaluate sex-dependent variations in ridge density in different areas of the footprint and its usefulness in discriminating sex in the young adult population of north India. The sample for the study consisted of 160 young adults (121 females) from north India. The left and right footprints were taken from each subject according to the standard procedures. The footprints were analysed using a 5 mm × 5 mm square and the ridge density was calculated in four different well-defined areas of the footprints. These were: F1 - the great toe on its proximal and medial side; F2 - the medial ball of the footprint, below the triradius (the triradius is a Y-shaped group of ridges on finger balls, palms and soles which forms the basis of ridge counting in identification); F3 - the lateral ball of the footprint, towards the most lateral part; and F4 - the heel in its central part, where the maximum breadth at the heel is cut by a perpendicular line drawn from the most posterior point on the heel. This value represents the number of ridges in a 25 mm² area and reflects the ridge density value. Ridge densities analysed on different areas of footprints were compared with each other using the Friedman test for related samples. The total footprint ridge density was calculated as the sum of the ridge density in the four areas of footprints included in the study (F1 + F2 + F3 + F4). The results show that the mean footprint ridge density was higher in females than males in all the designated areas of the footprints. The sex differences in footprint ridge density were observed to be statistically significant in the analysed areas of the footprint, except for the heel region of the left footprint. The total footprint ridge density was also observed to be significantly higher among females than males.
A statistically significant correlation was found between the ridge densities of most areas on both the left and right sides. Based on receiver operating characteristic (ROC) curve analysis, the sexing potential of footprint ridge density was observed to be considerably higher on the right side. The sexing potential for the four areas ranged between 69.2% and 85.3% on the right side, and between 59.2% and 69.6% on the left side. ROC analysis of the total footprint ridge density shows that the sexing potential of the right and left footprint was 91.5% and 77.7%, respectively. The study concludes that footprint ridge density can be utilised in the determination of sex as a supportive parameter. The findings of the study apply only to the north Indian population and may not be internationally generalisable. PMID:25413487
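The "sexing potential" percentages reported above come from ROC curve analysis; for a single measurement, the area under the ROC curve equals the probability that a randomly chosen female value exceeds a randomly chosen male value. A minimal sketch with made-up ridge-density totals (not the study's data):

```python
def auc(group_a, group_b):
    """Area under the ROC curve for a single scalar measurement:
    the probability that a random value from group_a exceeds one
    from group_b, counting ties as 1/2."""
    wins = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(group_a) * len(group_b))

# Hypothetical total ridge densities (ridges per 25 mm^2, F1 + F2 + F3 + F4)
females = [58, 61, 55, 63, 60]
males = [48, 52, 56, 54, 49]
print(auc(females, males))  # -> 0.96
```

An AUC of 0.5 would mean the measurement carries no sex information; values near 1.0 correspond to the high "sexing potential" figures quoted in the abstract.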
Modeled Salt Density for Nuclear Material Estimation in the Treatment of Spent Nuclear Fuel
DeeEarl Vaden; Robert. D. Mariani
2010-09-01
Spent metallic nuclear fuel is being treated in a pyrometallurgical process that includes electrorefining the uranium metal in molten eutectic LiCl-KCl as the supporting electrolyte. We report a model for determining the density of the molten salt. Inventory operations account for the net mass of salt and for the mass of actinides present. The molten salt density was needed for these operations but difficult to measure, so the salt density was modeled for the initial treatment operations. The model assumes, as a starting point, that volumes are additive for the ideal molten salt solution; subsequently, a correction factor for the lanthanides and actinides was developed. After applying the correction factor, the percent difference between the net salt mass in the electrorefiner and the resulting modeled salt mass decreased from more than 4.0% to approximately 0.1%. As a result, there is no need to measure the salt density at 500 °C for inventory operations; the model for the salt density is found to be accurate.
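The additive-volume starting point described above can be sketched in a few lines: density is total mass over total volume, with volumes summed per component and a single multiplicative correction applied to the heavy-metal chloride volumes. The component masses, densities, and correction factor below are illustrative placeholders, not the paper's values:

```python
def salt_density(base, heavy, heavy_volume_factor=1.0):
    """Ideal additive-volume density of a molten salt mixture.

    base, heavy: dicts of component -> (mass_g, density_g_per_cm3).
    Volumes are assumed additive; heavy-metal chloride volumes are
    scaled by heavy_volume_factor, a stand-in for the paper's
    empirical lanthanide/actinide correction."""
    mass = sum(m for m, _ in base.values()) + sum(m for m, _ in heavy.values())
    vol = sum(m / rho for m, rho in base.values())
    vol += heavy_volume_factor * sum(m / rho for m, rho in heavy.values())
    return mass / vol  # g/cm3

# Illustrative inventory: eutectic LiCl-KCl carrier salt plus some UCl3.
base = {"LiCl": (4400.0, 1.50), "KCl": (5600.0, 1.52)}
heavy = {"UCl3": (500.0, 5.5)}
print(round(salt_density(base, heavy, heavy_volume_factor=0.95), 3))
```

Fitting `heavy_volume_factor` against measured net salt masses is the analogue of the correction-factor step that reduced the model's error from about 4.0% to 0.1%.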
NASA Astrophysics Data System (ADS)
Shangguan, Pengcheng; Al-Qadi, Imad L.; Lahouar, Samer
2014-08-01
This paper presents the application of artificial neural network (ANN) based pattern recognition to extract the density information of asphalt pavement from simulated ground penetrating radar (GPR) signals. This study is part of research efforts into the application of GPR to monitor asphalt pavement density during compaction. The main challenge is to eliminate the effect of roller-sprayed water on GPR signals during compaction and to extract density information accurately. A calibration of the excitation function was conducted to provide an accurate match between the simulated signal and the real signal. A modified electromagnetic mixing model was then used to calculate the dielectric constant of asphalt mixture with water. A large database of GPR responses was generated from pavement models having different air void contents and various surface moisture contents using finite-difference time-domain simulation. Feature extraction was performed to extract density-related features from the simulated GPR responses. Air void contents were divided into five classes representing different compaction statuses. An ANN-based pattern recognition system was trained using the extracted features as inputs and air void content classes as target outputs. Accuracy of the system was tested using a test data set. Classification of air void contents using the developed algorithm is found to be highly accurate, which indicates the effectiveness of this method to predict asphalt concrete density.
Dafflon, Baptiste; Barrash, Warren; Cardiff, Michael A.; Johnson, Timothy C.
2011-12-15
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
Dynamics of photosynthetic photon flux density (PPFD) and estimates in coastal northern California
Technology Transfer Automated Retrieval System (TEKTRAN)
The seasonal trends and diurnal patterns of Photosynthetically Active Radiation (PAR) were investigated in the San Francisco Bay Area of Northern California from March through August in 2007 and 2008. During these periods, the daily values of PAR flux density (PFD), energy loading with PAR (PARE), a...
Kurimo, Mikko
WITH MIXTURE DENSITY HMMS. Mikko Kurimo and Panu Somervuo, Helsinki University of Technology. ... the Self-Organizing Map (SOM) algorithm. The advantage of using the SOM is the approximative topology created between the mixtures. The topology makes neighboring mixtures respond strongly to the same inputs, and so most of the nearest
Rosado-Mendez, Ivan M.; Nam, Kibo; Hall, Timothy J.; Zagzebski, James A.
2013-01-01
Reported here is a phantom-based comparison of methods for determining the power spectral density of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error. For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law fit error of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here the multitaper method reduced the standard deviation of the α0 and β estimates compared to those using the other techniques. Results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound. PMID:23858055
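The power-law attenuation model α(f) = α0·f^β used above becomes linear in log-log space (ln α = ln α0 + β ln f), so α0 and β can be recovered by ordinary least squares on the log-transformed data. A minimal sketch on synthetic, noise-free values (not the phantom data):

```python
import math

def fit_power_law(freqs, alphas):
    """Fit alpha(f) = alpha0 * f**beta by least squares in log-log space."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(a) for a in alphas]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar) ** 2 for x in xs)
    alpha0 = math.exp(ybar - beta * xbar)
    return alpha0, beta

# Synthetic data: alpha0 = 0.5 dB/cm/MHz^beta, beta = 1.1, f in MHz
freqs = [2.0, 3.0, 4.0, 5.0, 6.0]
alphas = [0.5 * f ** 1.1 for f in freqs]
a0, b = fit_power_law(freqs, alphas)
print(round(a0, 3), round(b, 3))  # -> 0.5 1.1
```

In practice the α(f) samples come from ratios of the estimated power spectral densities (STFT, Welch, or multitaper) between sample and reference phantom, and spectral-estimator variance propagates directly into the spread of the fitted α0 and β.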
Technology Transfer Automated Retrieval System (TEKTRAN)
Resolving uncertainty in the carbon cycle is paramount to refining climate predictions. Soil organic carbon (SOC) is a major component of terrestrial C pools, and accuracy of SOC estimates are only as good as the measurements and assumptions used to obtain them. Dryland soils account for a substanti...
ON THE IMPOSSIBILITY OF ESTIMATING DENSITIES IN THE EXTREME TAIL Jan Beirlant
Devroye, Luc
is in the domain of attraction of an extreme value distribution. Also, de Haan and Resnick (1996) have studied how to estimate extreme tail probabilities. All of these references do assume that the distribution of Yn
Rabinovich, J. E.; Gürtler, R. E.; Leal, J. A.; Feliciangeli, D.
1995-01-01
We report the use of the timed manual method, routinely employed as an indicator of the relative abundance of domestic triatomine bugs, to estimate their absolute density in houses. A team of six people collected Rhodnius prolixus Stål bugs from the walls and roofs of 14 typical palm-leaf rural houses located in Cojedes, Venezuela, spending 40 minutes searching in each house. One day after these manual collections, all the houses were demolished and the triatomine bugs were identified by instar and counted. Linear regression analyses of the number of R. prolixus collected over 4 man-hours and the census counts obtained by house demolition indicated that the fit of the data by instar (stage II-adult) and place of capture (roof versus palm walls versus mud walls) was satisfactory. The slopes of the regressions were interpreted as a measure of "catchability" (probability of capture). Catchability increased with developmental stage (ranging from 11.2% in stage II to 38.7% in adults), probably reflecting the increasing size and visibility of the bugs as they developed. The catchability on palm walls was higher than that for roofs or mud walls, increasing from 1.3% and 3.0% in stage II to 13.4% and 14.0% in adults, respectively. We also report regression equations for converting field estimates of timed manual collections of R. prolixus into absolute density estimates. PMID:7614667
Comparison of volumetric breast density estimations from mammography and thorax CT
NASA Astrophysics Data System (ADS)
Geeraert, N.; Klausz, R.; Cockmartin, L.; Muller, S.; Bosmans, H.; Bloch, I.
2014-08-01
Breast density has become an important issue in current breast cancer screening, both as a recognized risk factor for breast cancer and because it decreases screening efficiency through the masking effect. Different qualitative and quantitative methods have been proposed to evaluate area-based breast density and volumetric breast density (VBD). We propose a validation method comparing the computation of VBD obtained from digital mammographic images (VBDMX) with the computation of VBD from thorax CT images (VBDCT). We computed VBDMX by applying a conversion function to the pixel values in the mammographic images, based on models determined from images of breast-equivalent material. VBDCT is computed from the average Hounsfield Unit (HU) over the manually delineated breast volume in the CT images. This average HU is then compared to the HU of adipose and fibroglandular tissues from patient images. The VBDMX method was applied to 663 mammographic patient images taken on two Siemens Inspiration systems (hospL) and one GE Senographe Essential system (hospJ). For the comparison study, we collected images from patients who had a thorax CT and a mammography screening exam within the same year. In total, thorax CT images corresponding to 40 breasts (hospL) and 47 breasts (hospJ) were retrieved. Averaged over the 663 mammographic images, the median VBDMX was 14.7%. The density distribution and the inverse correlation between VBDMX and breast thickness were found as expected. The average difference between VBDMX and VBDCT is smaller for hospJ (4%) than for hospL (10%). This study shows the possibility of comparing VBDMX with the VBD from thorax CT exams, without additional examinations. In spite of the limitations caused by poorly defined breast limits, the calibration of mammographic images to local VBD provides opportunities for further quantitative evaluations.
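The VBDCT computation described above amounts to linearly interpolating the mean HU of the delineated breast volume between reference values for adipose and fibroglandular tissue. A minimal sketch, with illustrative reference HU values rather than the patient-derived ones used in the paper:

```python
def vbd_from_hu(mean_hu, hu_adipose=-100.0, hu_fibro=40.0):
    """Volumetric breast density from the mean Hounsfield Unit over the
    delineated breast volume, assuming a linear two-compartment mix of
    adipose and fibroglandular tissue. The reference HU values here are
    illustrative defaults, not the paper's per-patient calibration."""
    frac = (mean_hu - hu_adipose) / (hu_fibro - hu_adipose)
    return max(0.0, min(1.0, frac))  # clamp to the physical range [0, 1]

print(vbd_from_hu(-79.0))  # -> 0.15, i.e. 15% fibroglandular by volume
```

This two-point interpolation is what makes the CT-side estimate usable as a reference against VBDMX without any extra examination.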
Breast segmentation and density estimation in breast MRI: a fully automatic framework.
Gubern-Mérida, Albert; Kallenberg, Michiel; Mann, Ritse M; Martí, Robert; Karssemeijer, Nico
2015-01-01
Breast density measurement is an important aspect in breast cancer diagnosis as dense tissue has been related to the risk of breast cancer development. The purpose of this study is to develop a method to automatically compute breast density in breast MRI. The framework is a combination of image processing techniques to segment breast and fibroglandular tissue. Intra- and interpatient signal intensity variability is initially corrected. The breast is segmented by automatically detecting body-breast and air-breast surfaces. Subsequently, fibroglandular tissue is segmented in the breast area using expectation-maximization. A dataset of 50 cases with manual segmentations was used for evaluation. Dice similarity coefficient (DSC), total overlap, false negative fraction (FNF), and false positive fraction (FPF) are used to report similarity between automatic and manual segmentations. For breast segmentation, the proposed approach obtained DSC, total overlap, FNF, and FPF values of 0.94, 0.96, 0.04, and 0.07, respectively. For fibroglandular tissue segmentation, we obtained DSC, total overlap, FNF, and FPF values of 0.80, 0.85, 0.15, and 0.22, respectively. The method is relevant for researchers investigating breast density as a risk factor for breast cancer and all the described steps can be also applied in computer aided diagnosis systems. PMID:25561456
Jamilis, Martín; Garelli, Fabricio; Mozumder, Md Salatul Islam; Castañeda, Teresita; De Battista, Hernán
2015-10-01
This paper addresses the estimation of the specific production rate of intracellular products and the modeling of the bioreactor volume dynamics in high cell density fed-batch reactors. In particular, a new model for the bioreactor volume is proposed, suitable to be used in high cell density cultures where large amounts of intracellular products are stored. Based on the proposed volume model, two forms of a high-order sliding mode observer are proposed. Each form corresponds to the cases with residual biomass concentration or volume measurement, respectively. The observers achieve finite time convergence and robustness to process uncertainties as the kinetic model is not required. Stability proofs for the proposed observer are given. The observer algorithm is assessed numerically and experimentally. PMID:26149912
Separable Measurement Estimation of Density Matrices and its Fidelity Gap with Collective Protocols
E. Bagan; M. A. Ballester; R. D. Gill; R. Munoz-Tapia; O. Romero-Isart
2006-09-25
We show that there exists a gap between the performance of separable and collective measurements in qubit mixed-state estimation that persists in the large sample limit. We characterize this gap in terms of the corresponding bounds on the mean fidelity. We present an adaptive protocol that attains the separable-measurement bound. This (optimal separable) protocol uses von Neumann measurements and can be easily implemented with current technology.
Linkage Disequilibrium Estimation of Chinese Beef Simmental Cattle Using High-density SNP Panels
Zhu, M.; Zhu, B.; Wang, Y. H.; Wu, Y.; Xu, L.; Guo, L. P.; Yuan, Z. R.; Zhang, L. P.; Gao, X.; Gao, H. J.; Xu, S. Z.; Li, J. Y.
2013-01-01
Linkage disequilibrium (LD) plays an important role in genomic selection and mapping quantitative trait loci (QTL). In this study, the pattern of LD and the effective population size (Ne) were investigated in Chinese beef Simmental cattle. A total of 640 bulls were genotyped with the Illumina BovineSNP50 BeadChip and the Illumina BovineHD BeadChip. We estimated LD for each autosomal chromosome at distances between two random SNPs of 0 to 25 kb, 25 to 50 kb, 50 to 100 kb, 100 to 500 kb, 0.5 to 1 Mb, 1 to 5 Mb and 5 to 10 Mb. The mean values of r2 were 0.30, 0.16 and 0.08 for SNP separations of 0 to 25 kb, 50 to 100 kb and 0.5 to 1 Mb, respectively. The LD estimates decreased as the distance between SNP pairs increased, and increased with increasing minor allelic frequency (MAF) and with decreasing sample size. Estimates of the effective population size for Chinese beef Simmental cattle decreased over past generations, and Ne was 73 at five generations ago. PMID:25049849
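The r2 statistic summarized above is the standard squared LD correlation, computed from the haplotype frequency and the two allele frequencies. A minimal sketch (the example frequencies are made up, not the cattle data):

```python
def ld_r2(p_ab, p_a, p_b):
    """Squared LD correlation between two biallelic loci.

    p_ab: frequency of the A-B haplotype
    p_a, p_b: frequencies of allele A at locus 1 and allele B at locus 2
    r2 = D^2 / (p_a (1 - p_a) p_b (1 - p_b)), with D = p_ab - p_a * p_b."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Example: both alleles at frequency 0.5, AB haplotype at 0.35
print(round(ld_r2(0.35, 0.5, 0.5), 4))  # -> 0.16
```

Averaging this quantity over all SNP pairs within a distance bin gives the binned mean r2 values (0.30, 0.16, 0.08) reported in the abstract.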
Estimating the eigenstates of an unknown density operator without damaging it
Jingliang Gao; Feng Cai
2014-10-21
Given n qubits prepared according to the same unknown density operator, we propose a nondestructive measuring method which approximately yields the eigenstates. It is shown that, for any plane which passes through the center point of the Bloch sphere, there exists a corresponding projective measurement. By performing these measurements, we can scan the whole Bloch sphere like a radar to search for the orientation of the density operator and determine the eigenstates. We show the convergence of the measurements. This result actually reveals a mathematical structure of the n-fold Hilbert space.
Estimating $\\eta/s$ of QCD matter at high baryon densities
Karpenko, Iu; Huovinen, P; Petersen, H
2015-01-01
We report on the application of a cascade + viscous hydro + cascade model for heavy ion collisions in the RHIC Beam Energy Scan range, $\\sqrt{s_{\\rm NN}}=6.3\\dots200$ GeV. By constraining model parameters to reproduce the data we find that the effective (average) value of the shear viscosity over entropy density ratio $\\eta/s$ decreases from 0.2 to 0.08 when the collision energy grows from $\\sqrt{s_{\\rm NN}}\\approx7$ to 39 GeV.
Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S
2015-07-01
Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. 
The StaCC modeling framework provides a statistically coherent approach to estimating canopy conductance and transpiration and propagating estimation uncertainty into ecosystem models, paving the way for improved prediction of water and carbon uptake responses to environmental change. PMID:26063709
Estimating the effective density of engineered nanomaterials for in vitro dosimetry
DeLoid, Glen; Cohen, Joel M.; Darrah, Tom; Derk, Raymond; Wang, Liying; Pyrgiotakis, Georgios; Wohlleben, Wendel; Demokritou, Philip
2014-01-01
The need for accurate in vitro dosimetry remains a major obstacle to the development of cost-effective toxicological screening methods for engineered nanomaterials. An important key to accurate in vitro dosimetry is the characterization of sedimentation and diffusion rates of nanoparticles suspended in culture media, which largely depend upon the effective density and diameter of formed agglomerates in suspension. Here we present a rapid and inexpensive method for accurately measuring the effective density of nano-agglomerates in suspension. This novel method is based on the volume of the pellet obtained by bench-top centrifugation of nanomaterial suspensions in a packed cell volume tube, and is validated against gold-standard analytical ultracentrifugation data. This simple and cost-effective method allows nanotoxicologists to correctly model nanoparticle transport, and thus attain accurate dosimetry in cell culture systems, which will greatly advance the development of reliable and efficient methods for toxicological testing and investigation of nano-bio interactions in vitro. PMID:24675174
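The pellet-volume idea above can be written as a simple mass balance: the agglomerates occupy a fraction of the measured pellet volume and contain the ENM mass plus the media trapped inside them, so the effective density follows directly. The stacking factor, unit choices, and input values below are assumptions of this sketch, not the published protocol's calibration:

```python
def effective_density(m_enm_mg, rho_enm, v_pellet_ul, rho_media=1.00, sf=0.64):
    """Effective agglomerate density from a packed-cell-volume pellet.

    Mass balance: agglomerates occupy V = v_pellet * sf (sf = assumed
    pellet stacking factor); they contain the raw ENM mass plus media
    filling the rest of V. Units: mg, uL, g/cm3 (1 g/cm3 = 1 mg/uL)."""
    v_agg = v_pellet_ul * sf               # uL occupied by agglomerates
    v_enm = m_enm_mg / rho_enm             # uL of raw ENM material
    m_media = rho_media * (v_agg - v_enm)  # mg of media inside agglomerates
    return (m_enm_mg + m_media) / v_agg    # g/cm3

# 0.1 mg of a 5.0 g/cm3 ENM yielding a 1.0 uL pellet, sf assumed 0.5
print(round(effective_density(0.1, 5.0, 1.0, sf=0.5), 3))  # -> 1.16
```

Note how the effective density (here 1.16 g/cm3) lands far below the raw material density (5.0 g/cm3), which is the whole point for sedimentation and diffusion modeling in vitro.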
Individual movements and population density estimates for moray eels on a Caribbean coral reef
NASA Astrophysics Data System (ADS)
Abrams, R. W.; Schein, M. W.
1986-12-01
Observations of moray eel (Muraenidae) distribution made on a Caribbean coral reef are discussed in the context of long-term population trends. Observations of eel distribution made using SCUBA during 1978, 1979, 1980, and 1984 are compared and related to the occurrence of a hurricane in 1979. An estimate of the mean standing stock of moray eels is presented. The degree of site attachment is discussed for spotted morays (Gymnothorax moringa) and goldentail morays (Muraena miliaris). The repeated non-aggressive association of moray eels with large aggregations of potential prey fishes is detailed.
Systematic Parameter Estimation of a Density-Dependent Groundwater-Flow and Solute-Transport Model
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.; Traum, J. A.
2013-12-01
A SEAWAT-based flow and transport model of seawater intrusion was developed for the Santa Barbara groundwater basin in southern California that utilizes dual-domain porosity. Model calibration can be difficult when simulating flow and transport in large-scale hydrologic systems with extensive heterogeneity. To facilitate calibration, the hydrogeologic properties in this model are based on the fraction of coarse and fine-grained sediment interpolated from drillers' logs. This approach prevents over-parameterization by assigning one set of parameters to coarse material and another set to fine material. Estimated parameters include boundary conditions (such as areal recharge and surface-water seepage), hydraulic conductivities, dispersivities, and mass-transfer rate. As a result, the model has 44 parameters that were estimated by using the parameter-estimation software PEST, which uses the Gauss-Marquardt-Levenberg algorithm, along with various features such as singular value decomposition to improve calibration efficiency. The model is calibrated by using 36 years of observed water-level and chloride-concentration measurements, as well as first-order changes in head and concentration. Prior information on hydraulic properties is also provided to PEST as additional observations. The calibration objective is to minimize the sum of squared weighted residuals. In addition, observation sensitivities are investigated to effectively calibrate the model. An iterative parameter-estimation procedure is used to dynamically calibrate steady state and transient simulation models. The resulting head and concentration states from the steady-state model provide the initial conditions for the transient model. The transient calibration provides updated parameter values for the next steady-state simulation. This process repeats until a reasonable fit is obtained.
Preliminary results from the systematic calibration process indicate that tuning PEST by using a set of synthesized observations generated from model output reduces execution times significantly. Parameter sensitivity analyses indicate that both simulated heads and chloride concentrations are sensitive to the ocean boundary conductance parameter. Conversely, simulated heads are sensitive to some parameters, such as specific fault conductances, but chloride concentrations are insensitive to the same parameters. Heads are specifically found to be insensitive to mobile domain texture but sensitive to hydraulic conductivity and specific storage. The chloride concentrations are insensitive to some hydraulic conductivity and fault parameters but sensitive to mass transfer rate and longitudinal dispersivity. Future work includes investigating the effects of parameter and texture characterization uncertainties on seawater intrusion simulations.
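The calibration objective described above, a sum of squared weighted residuals, is simple to state in code. The residuals and weights below are hypothetical; in practice PEST assembles them from the head and chloride observation groups, with weights chosen so that differently scaled groups contribute comparably:

```python
def calibration_objective(residuals, weights):
    """Weighted least-squares objective minimized during calibration:
    Phi = sum_i (w_i * r_i)**2, where r_i = observed_i - simulated_i."""
    return sum((w * r) ** 2 for r, w in zip(residuals, weights))

# Hypothetical head residuals (m) and chloride residuals (mg/L); the
# chloride group gets a small weight so both groups contribute to Phi.
head_res = [0.3, -0.5, 0.2]
chloride_res = [12.0, -8.0]
phi = calibration_objective(head_res + chloride_res, [1.0] * 3 + [0.05] * 2)
print(round(phi, 3))  # -> 0.9
```

The Gauss-Marquardt-Levenberg algorithm iteratively adjusts the 44 parameters to drive this Phi downward, which is why weighting choices directly shape which observations dominate the fit.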
Sperling, Or; Shapira, Or; Cohen, Shabtai; Tripler, Effi; Schwartz, Amnon; Lazarovitch, Naftali
2012-09-01
In a world of diminishing water reservoirs and a rising demand for food, the practice and development of water stress indicators and sensors are in rapid progress. The heat dissipation method, originally established by Granier, is herein applied and modified to enable sap flow measurements in date palm trees in the southern Arava desert of Israel. A long and tough sensor was constructed to withstand insertion into the date palm's hard exterior stem. This stem is wide and fibrous, surrounded by an even tougher external non-conducting layer of dead leaf bases. Furthermore, being a monocot species, water flow does not necessarily occur through the outer part of the palm's stem, as in most trees. Therefore, it is highly important to investigate the variations of the sap flux densities and determine the preferable location for sap flow sensing within the stem. Once installed into fully grown date palm trees stationed on weighing lysimeters, sap flow as measured by the modified sensors was compared with the actual transpiration. Sap flow was found to be well correlated with transpiration, especially when using a recent calibration equation rather than the original Granier equation. Furthermore, including the axial variability of the sap flux densities was found to be highly important for accurate assessments of transpiration by sap flow measurements. The sensors indicated no transpiration at night, a high increase of transpiration from 06:00 to 09:00, maximum transpiration at 12:00, followed by a moderate reduction until 08:00, when transpiration ceased. These results were reinforced by the lysimeters' output. Reduced sap flux densities were detected at the stem's mantle when compared with its center. These results were reinforced by mechanistic measurements of the stem's specific hydraulic conductivity.
Variance on the vertical axis was also observed, indicating an accelerated flow towards the upper parts of the tree and raising a hypothesis concerning dehydrating mechanisms of the date palm tree. Finally, the sensors indicated reduction in flow almost immediately after irrigation of field-grown trees was withheld, at a time when no climatic or phenological conditions could have led to reduction in transpiration. PMID:22887479
Estimation of effective hydrologic properties of soils from observations of vegetation density
NASA Technical Reports Server (NTRS)
Tellers, T. E.; Eagleson, P. S.
1980-01-01
A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare soil component of evaporation and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given, water-limited, climate-soil system, is verified through comparisons with observed data. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterization of the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.
Optimal spectrum estimation of density operators with alkaline-earth atoms
NASA Astrophysics Data System (ADS)
Gorshkov, Alexey
2015-03-01
The eigenspectrum p = (p1, p2, ..., pd) of the density operator ρ describing the state of a quantum system can be used to characterize the entanglement of this system with its environment. In the seminal paper [Phys. Rev. A 64, 052311 (2001)], Keyl and Werner present the optimal measurement scheme for inferring p given n copies of an unknown state ρ. Since this measurement uses a highly entangled basis over the full joint state ρ⊗n of all copies, it should naively be extremely difficult to implement in practice. In this talk, we give a simple experimental protocol to carry out the Keyl-Werner measurement for ρ on the nuclear spin degrees of freedom of n alkaline-earth atoms using standard Ramsey spectroscopy techniques.
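The quantity being inferred can be made concrete with a small numerical sketch (an illustration of the eigenspectrum itself, not of the Keyl-Werner measurement protocol; the two-qubit state and its weights are arbitrary examples):

```python
import numpy as np

# Build a pure two-qubit state |psi> = sqrt(0.8)|00> + sqrt(0.2)|11>,
# trace out subsystem B, and read off the eigenspectrum of the reduced
# density operator rho_A, which quantifies entanglement with B.
psi = np.zeros(4)
psi[0] = np.sqrt(0.8)  # amplitude on |00>
psi[3] = np.sqrt(0.2)  # amplitude on |11>

rho_AB = np.outer(psi, psi.conj())             # full density operator
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2),   # partial trace over B
                 axis1=1, axis2=3)

spectrum = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
print(spectrum)  # eigenspectrum (p1, p2) = [0.8 0.2]
```

A maximally entangled state would give a flat spectrum (0.5, 0.5); a product state gives (1, 0).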
NASA Astrophysics Data System (ADS)
Kalimullina, L. R.; Nafikova, E. P.; Asfandiarov, N. L.; Chizhov, Yu. V.; Baibulova, G. Sh.; Zhdanov, E. R.; Gadiev, R. M.
2015-03-01
A number of compounds related to quinone derivatives are investigated by means of density functional theory at the B3LYP/6-31G(d) level. The vertical electron affinity Eva and/or the electron affinity Ea of the investigated compounds are known from experiments. The correlation between the calculated energies of π* molecular orbitals and the Eva values measured via electron transmission spectroscopy is determined with a coefficient of 0.96. It is established that theoretical values of the adiabatic electron affinity, calculated as the difference between the total energies of a neutral molecule and its radical anion, correlate with Ea values determined from electron transfer experiments with a correlation coefficient of 0.996.
NASA Astrophysics Data System (ADS)
Hloupis, G.; Vallianatos, F.
2015-09-01
The purpose of this study is to demonstrate the use of the wavelet transform (WT) as a common processing tool for rapid earthquake magnitude determination and epicentral estimation. The goal is to use the same set of wavelet coefficients that characterize the seismogram (and especially its P-wave portion) for both magnitude and location estimation. Wavelet magnitude estimation (WME) is used to derive a scaling relation between earthquake magnitude and wavelet coefficients for the South Aegean using data from 469 events with magnitudes from 3.8 to 6.9. The performance of the proposed relation was evaluated using data from 40 additional events with magnitudes from 3.8 to 6.2. In addition, epicentral estimation is achieved by a newly proposed method (wavelet epicentral estimation, WEpE), which is based on the combination of wavelet azimuth estimation and a two-station subarray method. Following a performance investigation of the WEpE method, we present results and simulations with real data from characteristic events that occurred in the South Aegean. Both methods can run in parallel, providing a suitable core for a regional earthquake early warning system in the South Aegean.
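The WME idea can be sketched as follows; this is an illustration only, since the paper's fitted scaling relation and wavelet basis are not given here (the single-level Haar detail, the coefficients `a` and `b`, and the log-linear form are all assumptions):

```python
import numpy as np

def haar_detail(x):
    """One-level Haar wavelet detail coefficients of an even-length signal."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wme_magnitude(p_wave, a=1.0, b=4.0):
    """Hypothetical log-linear scaling M = a*log10(max|detail|) + b."""
    peak = np.max(np.abs(haar_detail(p_wave)))
    return a * np.log10(peak) + b

# Toy P-wave window; in practice a and b would be regressed from the
# 469 calibration events, not fixed by hand.
p_wave = np.array([10.0, 0.0] * 8)
print(round(wme_magnitude(p_wave), 2))  # 4.85
```

The point is structural: a peak wavelet coefficient of the P-wave window stands in for amplitude, and a regression maps it to magnitude.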
Lawrence M. Krauss; Brian Chaboyer
2001-11-30
New estimates of globular cluster distances, combined with revised ranges for input parameters in stellar evolution codes and recent estimates of the earliest redshift of cluster formation, allow us to derive a new 95% confidence level lower limit on the age of the Universe of 11 Gyr. This is now definitively inconsistent with the expansion age for a flat Universe for the currently allowed range of the Hubble constant unless the cosmic equation of state is dominated by a component that violates the strong energy condition. This solidifies the case for a dark energy-dominated universe, complementing supernova data and direct measurements of the geometry and matter density in the Universe. The best-fit age is consistent with a cosmological constant-dominated (w = pressure/energy density = -1) universe. For the Hubble Key Project best-fit value of the Hubble constant, our age limits yield the constraints w < -0.4 and Omega_matter < 0.38 at the 68% confidence level, and w < -0.26 and Omega_matter < 0.58 at the 95% confidence level.
Estimation of Heavy Ion Densities From Linearly Polarized EMIC Waves At Earth
Kim, Eun-Hwa; Johnson, Jay R.; Lee, Dong-Hun
2014-02-24
Linearly polarized EMIC waves are expected to concentrate at the location where their wave frequency satisfies the ion-ion hybrid (IIH) resonance condition as the result of a mode conversion process. In this letter, we evaluate absorption coefficients at the IIH resonance at Earth's geosynchronous orbit for variable concentrations of helium and variable azimuthal and field-aligned wave numbers in a dipole magnetic field. Although wave absorption occurs for a wide range of heavy ion concentrations, it only occurs for a limited range of azimuthal and field-aligned wave numbers such that the IIH resonance frequency is close to, but not exactly the same as, the crossover frequency. Our results suggest that, at L = 6.6, linearly polarized EMIC waves can be generated via mode conversion from the compressional waves near the crossover frequency. Consequently, the heavy ion concentration ratio can be estimated from observations of externally generated EMIC waves that have linear polarization.
King, Tania L.; Thornton, Lukar E.; Bentley, Rebecca J.; Kavanagh, Anne M.
2015-01-01
Background Local destinations have previously been shown to be associated with higher levels of both physical activity and walking, but little is known about how the distribution of destinations is related to activity. Kernel density estimation is a spatial analysis technique that accounts for the location of features relative to each other. Using kernel density estimation, this study sought to investigate whether individuals who live near destinations (shops and service facilities) that are more intensely distributed rather than dispersed: 1) have higher odds of being sufficiently active; 2) engage in more frequent walking for transport and recreation. Methods The sample consisted of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Destinations within these areas were geocoded and kernel density estimates of destination intensity were created using kernels of 400 m (meters), 800 m and 1200 m. Using multilevel logistic regression, the association between destination intensity (classified in quintiles Q1 (least) to Q5 (most)) and likelihood of: 1) being sufficiently active (compared to insufficiently active); 2) walking ≥4/week (at least 4 times per week, compared to walking less), was estimated in models that were adjusted for potential confounders. Results For all kernel distances, there was a significantly greater likelihood of walking ≥4/week among respondents living in areas of greatest destination intensity compared to areas with least destination intensity: 400 m (Q4 OR 1.41, 95%CI 1.02–1.96; Q5 OR 1.49, 95%CI 1.06–2.09), 800 m (Q4 OR 1.55, 95%CI 1.09–2.21; Q5 OR 1.71, 95%CI 1.18–2.48) and 1200 m (Q4 OR 1.70, 95%CI 1.18–2.45; Q5 OR 1.86, 95%CI 1.28–2.71). There was also evidence of associations between destination intensity and sufficient physical activity; however, these associations were markedly attenuated when walking was included in the models.
Conclusions This study, conducted within urban Melbourne, found that those who lived in areas of greater destination intensity walked more frequently and showed higher odds of being sufficiently physically active, an effect that was largely explained by levels of walking. The results suggest that increasing the intensity of destinations in areas where they are more dispersed, and/or planning neighborhoods with greater destination intensity, may increase residents' likelihood of being sufficiently active for health. PMID:26355848
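The kernel density idea behind the destination-intensity measure can be sketched as follows (a minimal stand-in, not the study's GIS implementation; the Gaussian kernel and the example coordinates are assumptions):

```python
import numpy as np

def destination_intensity(residence, destinations, bandwidth=400.0):
    """Kernel-weighted count of destinations near a residence.

    Coordinates are in metres; the 400 m default mirrors the smallest
    kernel used in the study. Clustered destinations score higher than
    dispersed ones at the same raw count.
    """
    d = np.linalg.norm(np.asarray(destinations, dtype=float)
                       - np.asarray(residence, dtype=float), axis=1)
    # Gaussian kernel; any kernel that decays with distance behaves similarly.
    return float(np.sum(np.exp(-0.5 * (d / bandwidth) ** 2)))

home = (0.0, 0.0)
clustered = [(50.0, 0.0), (60.0, 10.0), (70.0, -10.0)]   # all within ~70 m
dispersed = [(50.0, 0.0), (900.0, 0.0), (0.0, 1100.0)]   # same count, spread out
print(destination_intensity(home, clustered) > destination_intensity(home, dispersed))  # True
```

The same three destinations yield a higher intensity when clustered near the residence, which is exactly the property that distinguishes this measure from a simple count within a buffer.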
Gutiérrez-Redomero, Esperanza; Rivaldería, Noemí; Alonso-Rodríguez, Concepción; Sánchez-Andrés, Ángeles
2014-05-01
In recent times, some studies have explored the forensic application of dermatoglyphic traits such as the epidermal ridge breadth or ridge density (RD) toward the inference of sex and population from fingerprints of unknown origin, as significant differences in fingerprints have been demonstrated between sexes and between populations. Part of the population differences found between these studies could be of a methodological nature, due both to the lack of standardisation in the position of the counting area and to differences in the method used for obtaining the fingerprint. Therefore, the aim of this study was to check whether there are differences between the RD of fingerprints depending on where the counting area is placed and how the fingerprints are obtained. Fingerprints of each finger were obtained from 102 adult Spanish subjects (50 females and 52 males), using two methods (plain and rolled). The ridge density of each fingerprint was assessed in five different areas of the dactylogram: two closer to the core area (one on the radial and the other on the ulnar side), two closer to the outermost area of each of the sides (radial and ulnar), and another one in the proximal region of the fingertip. Regardless of the method used and of the position of the counting area, thumbs and forefingers show a higher RD than middle, ring, and little fingers in both sexes, and females present a higher RD than males in all areas and fingers. In both males and females, RD values in the core region are higher than those in the outer region, irrespective of the technique of fingerprinting used (rolled or plain). Regardless of the sex and location of the count area (core or outer), the rolled fingerprints exhibit a greater RD than the plain ones in both radial and proximal areas, whereas the trend is inverted in the ulnar area, where rolled fingerprints show a lower RD than the plain ones.
Therefore, in order for the results of different studies to be comparable, it is necessary to standardise the position of the count area and to use the same method of obtaining the fingerprint, especially when involving a forensic application. PMID:24796949
Medeiros, Stephen; Hagen, Scott; Weishampel, John; Angelo, James
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
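The reported improvements follow from simple relative-error arithmetic on the quoted figures, which can be verified directly:

```python
# Arithmetic behind the reported DEM improvements (values from the abstract).
rmse_raw, rmse_adj = 0.65, 0.40   # RMS error, metres, before/after adjustment
bias_raw, bias_adj = 0.61, 0.32   # raw mean error (high bias), metres

rmse_gain = (rmse_raw - rmse_adj) / rmse_raw   # relative RMSE reduction
bias_gain = (bias_raw - bias_adj) / bias_raw   # relative bias reduction

print(f"RMSE improvement: {rmse_gain:.0%}")       # 38%
print(f"High-bias reduction: {bias_gain:.0%}")    # 48%
```

The computed bias reduction (47.5%, printed as 48%) is consistent with the abstract's "approximately 49%".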
NASA Astrophysics Data System (ADS)
Ebrahimi, A.; Habibi Khorassani, S. M.; Delarami, H.
2009-11-01
Individual hydrogen bond (HB) energies have been estimated in several systems involving multiple HBs, such as adenine-thymine and guanine-cytosine, using electron charge densities calculated at X⋯H hydrogen bond critical points (HBCPs) by the atoms in molecules (AIM) method at the B3LYP/6-311++G** and MP2/6-311++G** levels. A symmetrical system with two identical H bonds has been selected to search for simple relations between ρHBCP and the individual EHB. Correlation coefficients between EHB and ρHBCP based on linear, quadratic, and exponential equations are acceptable and equal to 0.95. The estimated individual binding energies EHB are in good agreement with the results of the atom-replacement approach and natural bond orbital (NBO) analysis. The EHB values estimated from ρ values at the H⋯X BCP are in satisfactory agreement with the main geometrical parameter H⋯X. With respect to the obtained individual binding energies, the strength of a HB depends on the substituent and on the cooperative effects of other HBs.
Aziz, Ramy K.; Dwivedi, Bhakti; Akhter, Sajia; Breitbart, Mya; Edwards, Robert A.
2015-01-01
Phages are the most abundant biological entities on Earth and play major ecological roles, yet the current sequenced phage genomes do not adequately represent their diversity, and little is known about the abundance and distribution of these sequenced genomes in nature. Although the study of phage ecology has benefited tremendously from the emergence of metagenomic sequencing, a systematic survey of phage genes and genomes in various ecosystems is still lacking, and fundamental questions about phage biology, lifestyle, and ecology remain unanswered. To address these questions and improve comparative analysis of phages in different metagenomes, we screened a core set of publicly available metagenomic samples for sequences related to completely sequenced phages using the web tool, Phage Eco-Locator. We then adopted and deployed an array of mathematical and statistical metrics for a multidimensional estimation of the abundance and distribution of phage genes and genomes in various ecosystems. Experiments using those metrics individually showed their usefulness in emphasizing the pervasive, yet uneven, distribution of known phage sequences in environmental metagenomes. Using these metrics in combination allowed us to resolve phage genomes into clusters that correlated with their genotypes and taxonomic classes as well as their ecological properties. We propose adding this set of metrics to current metaviromic analysis pipelines, where they can provide insight regarding phage mosaicism, habitat specificity, and evolution. PMID:26005436
Thomas, Len
A passive acoustic method for estimating the density of echolocating cetaceans that dive synchronously assumes that all dive starts of the target species within a defined area are detected. Converting detections (detected dive starts or detected clicks) to density is the key hurdle, as is defining the area monitored in the dive count.
Madureira, Tânia Vieira; Lopes, Célia; Malhão, Fernanda; Rocha, Eduardo
2015-02-01
Accurately assessing changes in the intracellular volumes (or numbers) of peroxisomes within a cell can be a lengthy task, because unbiased estimations can be made only by studies conducted under transmission electron microscopy. Yet, such information is often required, namely for correlations with functional data. The optimization and applicability of a fast new technical procedure based on catalase immunofluorescence was implemented herein by using primary hepatocytes from brown trout (Salmo trutta f. fario), exposed during 96 h to two distinct treatments (0.1% ethanol and 50 µM of 17α-ethynylestradiol). The time and cost efficiency, together with the results obtained by stereological analyses, specifically directed to the volume densities of peroxisomes, and additionally of the nucleus in relation to the hepatocyte, were compared with the well-established 3,3'-diaminobenzidine cytochemistry for electron microscopy. With the immuno technique it was possible to correctly distinguish punctate peroxisomal profiles, allowing the selection of the marked organelles for quantification. By both methodologies, a significant reduction in the volume density of the peroxisome within the hepatocyte was obtained after an estrogenic input. The most interesting point here was that the volume density ratios were well correlated between both techniques. Overall, the immunofluorescence protocol for catalase was evidently faster and cheaper, and it provided reliable quantitative data that discriminated the compared groups in the same way. After this validation study, we recommend the use of catalase immunofluorescence as the first option for rapid screening of changes in the amount of hepatocytic peroxisomes, using their volume density as an indicator. PMID:25431324
Mobile sailing robot for automatic estimation of fish density and monitoring water quality
2013-01-01
Introduction The paper presents the methodology and the algorithm developed to analyze sonar images focused on fish detection in small water bodies and on measurement of their parameters: volume, depth and GPS location. The final results are stored in a table and can be exported to any numerical environment for further analysis. Material and method The measurement method for estimating the number of fish using the automatic robot is based on a sequential calculation of the number of occurrences of fish on the set trajectory. The data analysis from the sonar concerned automatic recognition of fish using methods of image analysis and processing. Results An image analysis algorithm, a mobile robot together with its control in the 2.4 GHz band, and fully encrypted communication with the data archiving station were developed as part of this study. For the three model fish ponds where verification of fish catches was carried out (548, 171 and 226 individuals), the measurement error of the described method did not exceed 8%. Summary The robot, together with the developed software, can work remotely in a variety of harsh weather and environmental conditions, is fully automated and can be remotely controlled over the Internet. The designed system enables spatial location of fish (GPS coordinates and depth). The purpose of the robot is non-invasive measurement of the number of fish in water reservoirs and measurement of the quality of drinking water consumed by humans, especially in situations where local sources of pollution could have a significant impact on the quality of water collected for treatment and where access to these places is difficult. Used systematically and equipped with appropriate sensors, the robot can be part of an early warning system against pollution of water used by humans (drinking water, natural swimming pools) that can be dangerous to their health. PMID:23815984
Kim, Mijin; Hyun, Seunghun; Kwon, Jung-Hwan
2015-10-01
The accumulation of marine plastic debris is one of the main emerging environmental issues of the twenty-first century. Numerous studies in recent decades have reported the level of plastic particles on beaches and in oceans worldwide. However, it is still unclear how much plastic debris remains in the marine environment, because the sampling methods for identifying and quantifying plastics from the environment have not been standardized; moreover, the methods are not guaranteed to find all of the plastics that do remain. The level of identified marine plastic debris may account for only a small portion of the remaining plastics. To perform a quantitative estimation of remaining plastics, a mass balance analysis was performed for high- and low-density PE within the borders of South Korea during 1995-2012. Disposal methods such as incineration, land disposal, and recycling accounted for only approximately 40% of PE use, whereas 60% remained unaccounted for. The total unaccounted mass of high- and low-density PE during the evaluation period was 28 million tons. The corresponding contribution to marine plastic debris would be approximately 25,000 tons and 70 g km⁻² of the world oceans, assuming that the fraction entering the marine environment is 0.001 and that the degradation half-life is 50 years in seawater. Because the observed concentrations of plastics worldwide were much lower than the range expected by extrapolation from this mass balance study, there probably remains a huge mass of unidentified plastic debris. Further research is therefore needed to fill this gap between the mass balance approximation and the identified marine plastics, including a better estimation of the mass flux to the marine environment. PMID:26153107
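The quoted figures can be approximately reproduced with back-of-envelope arithmetic (the world-ocean area used below is an assumption not stated in the abstract):

```python
# Back-of-envelope reproduction of the mass-balance figures in the abstract:
# 28 million tons of PE unaccounted for, an assumed fraction 0.001 entering
# the marine environment, and a 50-year degradation half-life.
unaccounted_tons = 28e6
fraction_to_sea = 0.001
input_tons = unaccounted_tons * fraction_to_sea   # 28,000 tons entering the sea

# With a 50-year half-life, mass released over 1995-2012 decays only
# slightly, leaving roughly the ~25,000 tons quoted in the abstract.
remaining_tons = 25_000

# World-ocean area of ~3.61e8 km^2 (assumed here, not given in the abstract).
ocean_area_km2 = 3.61e8
grams_per_km2 = remaining_tons * 1e6 / ocean_area_km2   # metric tons -> grams
print(round(grams_per_km2))  # 69, consistent with the quoted ~70 g/km^2
```

This makes explicit which inputs drive the headline number: the unaccounted mass, the assumed marine fraction, and the assumed degradation rate.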
NASA Astrophysics Data System (ADS)
Medhi, Biswajit; Hegde, Gopalkrishna Mahadeva; Reddy, Kalidevapura Polareddy Jagannath; Roy, Debasish; Vasu, Ram Mohan
2014-12-01
A simple method employing an optical probe is presented to measure density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a random dot pattern. Local slopes of the distorted wavefront are obtained from shifts of the dots in the pattern. Local shifts in the dots are accurately measured by cross-correlating local shifted shadows with the corresponding unshifted originals. The measured slopes are suitably unwrapped by using a discrete cosine transform based phase unwrapping procedure and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile shaped body at a free-stream Mach number of 5.8 measured using this technique are compared with the numerically estimated values.
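The dot-shift measurement can be sketched in one dimension (a toy stand-in for the 2-D shadow correlation; the synthetic pattern and circular shifts are assumptions):

```python
import numpy as np

# The local shift of the shadowed dot pattern is recovered as the argmax of
# the cross-correlation between the shifted window and the unshifted original.
rng = np.random.default_rng(1)
reference = rng.random(64)                # unshifted local dot-pattern window
true_shift = 5
shifted = np.roll(reference, true_shift)  # flow-distorted (shifted) window

# Mean-subtract, then circular cross-correlation over all lags.
r = reference - reference.mean()
s = shifted - shifted.mean()
corr = np.array([np.dot(s, np.roll(r, k)) for k in range(len(r))])
estimated_shift = int(np.argmax(corr))
print(estimated_shift)  # 5
```

In the actual probe the same operation is done in 2-D on local sub-images, and the recovered shifts give the local wavefront slopes that feed the tomographic inversion.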
Patel, Ameera X.; Bullmore, Edward T.
2015-05-03
fraction (SF):

SF_t = [1 / (n(V) · J)] Σ_{v∈V} Σ_{j=1}^{J} W̃_{j,t}(v),   (7)

where V is the set of all brain voxels. For a specified target "effective" window length (w), the dynamic window length D starting at time t was defined as:

D_t = Σ_{i=t}^{T} SF_i,   (8)

where...
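Under one plausible reading of this fragment (the array shapes and the binary spike indicators below are assumptions), the two quantities can be computed as:

```python
import numpy as np

rng = np.random.default_rng(2)
J, T, n_vox = 4, 10, 100                              # scales, time points, voxels
W = (rng.random((J, T, n_vox)) < 0.1).astype(float)   # toy spike indicators W~_{j,t}(v)

# Eq. (7): spike fraction at time t, averaged over all voxels and scales.
SF = W.sum(axis=(0, 2)) / (n_vox * J)

# Eq. (8): dynamic window length, accumulating SF from time t to the end.
D = np.array([SF[t:].sum() for t in range(T)])

print(SF.shape, float(D[0]))
```

Because SF is nonnegative, D is nonincreasing in t, i.e., the available "effective" window shrinks toward the end of the series.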
A new wavelet-based approach for the automated treatment of large sets of lunar occultation data
O. Fors; A. Richichi; X. Otazu; J. Nunez
2008-01-14
The introduction of infrared arrays for lunar occultations (LO) work and the improvement of predictions based on new deep IR catalogues have resulted in a large increase in the number of observable occultations. We provide the means for an automated reduction of large sets of LO data. This frees the user from the tedious task of estimating first-guess parameters for the fit of each LO lightcurve. At the end of the process, ready-made plots and statistics enable the user to identify sources which appear to be resolved or binary and to initiate their detailed interactive analysis. The pipeline is tailored to array data, including the extraction of the lightcurves from FITS cubes. Because of its robustness and efficiency, the wavelet transform has been chosen to compute the initial guess of the parameters of the lightcurve fit. We illustrate and discuss our automatic reduction pipeline by analyzing a large volume of novel occultation data recorded at Calar Alto Observatory. The automated pipeline package is available from the authors.
NASA Astrophysics Data System (ADS)
Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark
2013-12-01
We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite element) implementation on a hybrid multi-CPU/multi-GPU (graphics processing units/GPUs) cluster. This allows us to obtain information on density distributions in heterogeneous media with an efficient computational time. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-square or a gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a pre-conditioning of the huge linear system to be solved and a reduction of two or three orders of magnitude of the computational time can be obtained for a given number of CPU processor cores. The memory storage required is also significantly reduced by a similar factor. Finally, we show how this CPU parallel inversion code can be accelerated further by a factor between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression the whole inversion process can last from a few minutes to less than an hour for a given number of processor cores instead of tens of hours for a similar number of processor cores when compression is not used.
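The compression idea can be sketched with a single-level Haar transform applied row-wise to a toy sensitivity matrix (not the TOMOFAST3D implementation; the smooth synthetic kernels and the threshold rule are assumptions):

```python
import numpy as np

def haar_rows(G):
    """Single-level Haar transform of each row of an even-width matrix."""
    a = (G[:, 0::2] + G[:, 1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (G[:, 0::2] - G[:, 1::2]) / np.sqrt(2.0)  # detail coefficients
    return np.hstack([a, d])

rng = np.random.default_rng(3)
# Smooth synthetic kernels (Gaussian bumps): their wavelet details are small,
# so most coefficients can be zeroed with little loss.
x = np.linspace(0.0, 1.0, 256)
G = np.exp(-((x[None, :] - rng.random((50, 1))) ** 2) / 0.01)

C = haar_rows(G)
threshold = 1e-3 * np.abs(C).max()
C_sparse = np.where(np.abs(C) > threshold, C, 0.0)
kept = np.count_nonzero(C_sparse) / C.size
print(f"coefficients kept: {kept:.1%}")
```

Storing and multiplying only the retained coefficients is what produces the memory and run-time savings; a multi-level transform (Haar or Daubechies-4, as in the paper) compresses smooth kernels far more aggressively than this one-level sketch.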
NASA Astrophysics Data System (ADS)
Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2015-03-01
An inductively coupled plasma (ICP) based negative hydrogen ion source has been chosen for the ITER neutral beam (NB) systems. To avoid regular maintenance in a radioactive environment with a high flux of 14 MeV neutrons and gamma rays, invasive plasma diagnostics such as probes are not included in the ITER NB source design. Meanwhile, optical or microwave based diagnostics, which are normally used in other plasma sources, are to be avoided in the case of the ITER sources due to overall system design and interface issues. In such a situation, alternative forms of assessment to characterize the ion source plasma become a necessity. At present, the beam current through the extraction system in the ion source is the only measurement that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition and on negative ion stripping. Apart from that, the ICP production region (the radio frequency (RF) driver region) is placed far (~30 cm) from the extraction region. Therefore, there are uncertainties involved in linking the beam current with plasma properties inside the RF driver. To maintain the optimum condition for source operation it is necessary to maintain optimum conditions in the driver. A method of characterizing the plasma density in the driver without using any invasive or non-invasive probes could be a useful tool to achieve that objective. Such a method, exclusively for ICP based ion sources, is presented in this paper. In this technique, the plasma density inside the RF driver is estimated through measurements of the electrical parameters in the RF power supply circuit path. Monitoring the RF driver plasma through the described route will be useful during the source commissioning phase and also in the beam operation phase.
Jean-Richard, Vreni; Crump, Lisa; Abicho, Abbani Alhadj; Abakar, Ali Abba; Mahamat, Abdraman; Bechir, Mahamat; Eckert, Sandra; Engesser, Matthias; Schelling, Esther; Zinsstag, Jakob
2015-01-01
Mobile pastoralists provide major contributions to the gross domestic product in Chad, but little information is available regarding their demography. The Lake Chad area population is increasing, resulting in competition for scarce land and water resources. For the first time, the density of people and animals from mobile and sedentary populations was assessed using randomly defined sampling areas. Four sampling rounds were conducted over two years in the same areas to show population density dynamics. We identified 42 villages of sedentary communities in the sampling zones; 11 (in 2010) and 16 (in 2011) mobile pastoralist camps at the beginning of the dry season and 34 (in 2011) and 30 (in 2012) camps at the end of the dry season. A mean of 64.0 people per km2 (95% confidence interval, 20.3-107.8) were estimated to live in sedentary villages. In the mobile communities, we found 5.9 people per km2 at the beginning and 17.5 people per km2 at the end of the dry season. On average, we recorded per km2 21.0 cattle and 31.6 small ruminants in the sedentary villages and 66.1 cattle and 102.5 small ruminants in the mobile communities, which amounts to a mean of 86.6 tropical livestock units during the dry season. These numbers exceed, by up to five times, the published carrying capacities for similar Sahelian zones. Our results underline the need for a new institutional framework. Improved land use management must equally consider the needs of mobile communities and sedentary populations. PMID:26054513
Khazen, Michael; Warren, Ruth M.L.; Boggis, Caroline R.M.; Bryant, Emilie C.; Reed, Sadie; Warsi, Iqbal; Pointon, Linda J.; Kwan-Lim, Gek E.; Thompson, Deborah; Eeles, Ros; Easton, Doug; Evans, D. Gareth; Leach, Martin O.
2008-01-01
Purpose A method and computer-tool to estimate percentage MRI breast density using 3D T1-weighted Magnetic Resonance Imaging (MRI) is introduced, and compared with mammographic percentage density (XRM). Materials & Methods Ethical approval and informed consent were obtained. A method to assess MRI breast density as percentage volume occupied by water-containing tissue on 3D T1-weighted MR images is described and applied in a pilot study to 138 subjects who were imaged by both MRI and XRM during the MARIBS screening study. For comparison, percentage mammographic density was measured from matching XRMs as a ratio of dense to total projection areas scored visually using a 21 point score and measured by applying a 2D interactive program (CUMULUS). The MRI and XRM percent methods were compared, including assessment of left-right and inter-reader consistency. Results Percent MRI density correlated strongly (r=0.78, p<0.0001) with percent mammographic density estimated using Cumulus. Comparison with visual assessment also showed a strong correlation. The mammographic methods overestimate density compared with MRI volumetric assessment by a factor approaching 2. Discussion MRI provides direct 3D measurement of the proportion of water based tissue in the breast. It correlates well with visual and computerised percent mammographic density measurements. This method may have direct application in women having breast cancer screening by breast MRI and may aid in determination of risk. PMID:18768492
Niimi, Rei; Tsuchiyama, Akira; Kadono, Toshihiko; Okudaira, Kyoko; Hasegawa, Sunao; Tabata, Makoto; Watanabe, Takayuki; Yagishita, Masahito; Machii, Nagisa; Nakamura, Akiko M.; Uesugi, Kentaro; Takeuchi, Akihisa; Nakano, Tsukasa
2012-01-01
A large number of cometary dust particles were captured with low-density silica aerogel during the NASA Stardust mission. The dust particles penetrated into the aerogel and formed various track shapes. To estimate the properties of the dust particles, such as density and size, based on the morphology of the tracks, we carried out systematic experiments testing impacts into low-density aerogel at 6 km s⁻¹ using projectiles of various sizes and densities. We found that the maximum track diameter and the ratio of the track length to the maximum track diameter in aerogel are good indicators of projectile size and density, respectively. Based on these results, we estimated the size and density of individual dust particles from comet 81P/Wild 2. The average density of the 'fluffy' dust particles and the bulk density of all dust particles were obtained as 0.35 ± 0.07 and 0.49 ± 0.18 g cm⁻³, respectively. These statistical data provided the content of monolithic and coarse grains in the Stardust particles, ~30 wt%. Combining this result with some mid-infrared observational data, we found that the content of crystalline silicates is ~50 wt% or more of the non-volatile material.
NASA Astrophysics Data System (ADS)
Buchert, Stephan C.; Eriksson, Anders; Gill, Reine; Nilsson, Thomas; Åhlen, Lennart; Wahlund, Jan-Erik; Knudsen, David; Burchill, Johnathan; Archer, William; Kouznetsov, Alexei; Stricker, Nico; Bouridah, Abderrazak; Bock, Ralph; Häggström, Ingemar; Rietveld, Michael; Gonzalez, Sixto; Aponte, Nestor
2014-05-01
The Langmuir Probes (LP) on the Swarm satellites are part of the Electric Field Instruments (EFI), which feature thermal ion imagers (TII) measuring 3-D ion distributions. The main task of the Langmuir probes is to provide measurements of the spacecraft potentials influencing the ions before they enter the TIIs. In addition, electron density (Ne) and temperature (Te) are estimated from EFI LP data. The design of the Swarm LP includes standard current sampling under sweeps of the bias voltage, and also a novel ripple technique yielding derivatives of the current-voltage characteristics at three points in a rapid cycle. In normal mode the time resolution of the Ne and Te measurements thus becomes 0.5 s. We show first Ne and Te estimates from the EFI LPs obtained in the commissioning phase in December 2013, when all three satellites were following each other at about 500 km altitude at mutual distances of a few tens of kilometers. The LP data are compared with observations by incoherent scatter radars, namely EISCAT UHF, VHF, the ESR, and also Arecibo. Acknowledgements: The EFIs were developed and built by a consortium that includes COM DEV Canada, the University of Calgary, and the Swedish Institute for Space Physics in Uppsala. The Swarm EFI project is managed and funded by the European Space Agency with additional funding from the Canadian Space Agency. EISCAT is an international association supported by research organisations in China (CRIRP), Finland (SA), Japan (NIPR and STEL), Norway (NFR), Sweden (VR), and the United Kingdom (NERC). The Arecibo Observatory is operated by SRI International under a cooperative agreement with the National Science Foundation (AST-1100968), and in alliance with Ana G. Méndez-Universidad Metropolitana, and the Universities Space Research Association.
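As a generic illustration of Langmuir-probe sweep analysis (not the Swarm EFI ripple technique itself, which is not specified in the abstract), the electron retardation region of an I-V characteristic is exponential in the bias voltage, so the slope of ln(I) versus V yields the electron temperature. A minimal sketch with synthetic data:

```python
import numpy as np

def electron_temperature_eV(v_bias, i_electron):
    """Estimate Te (in eV) from the electron retardation region of a
    Langmuir probe sweep, where I is proportional to exp(eV / kTe):
    the slope of ln(I) versus V (in volts) equals 1/Te[eV]."""
    slope, _ = np.polyfit(v_bias, np.log(i_electron), 1)
    return 1.0 / slope

# synthetic retardation-region sweep generated with Te = 2 eV
v = np.linspace(-5.0, -1.0, 50)
i = 1e-3 * np.exp(v / 2.0)
te = electron_temperature_eV(v, i)  # recovers 2 eV
```

Electron density would follow from the saturation current and probe geometry, which this sketch leaves out.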
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641
Wu, Yunfeng; Shi, Lei
2011-04-01
Human locomotion is regulated by the central nervous system (CNS). The neurophysiological changes in the CNS due to amyotrophic lateral sclerosis (ALS) may cause altered gait cycle duration (stride interval) or other gait rhythm changes. This article used a statistical method to analyze the altered stride interval in patients with ALS. We first estimated the probability density functions (PDFs) of stride interval from the outlier-processed gait rhythm time series, using the nonparametric Parzen-window approach. Based on the estimated PDFs, the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD) can be computed to serve as dominant features. In the classification experiments, the least squares support vector machine (LS-SVM) with Gaussian kernels was applied to distinguish the stride patterns of ALS patients. According to the results obtained with the stride interval time series recorded from 16 healthy control subjects and 13 patients with ALS, the key findings of the present study are summarized as follows. (1) The mean of stride interval computed based on the PDF for the left foot is correlated with that for the right foot in patients with ALS. (2) The MKLD parameter of the gait in ALS is significantly different from that in healthy controls. (3) The diagnostic performance of the nonlinear LS-SVM, evaluated by the leave-one-out cross-validation method, is superior to that obtained by linear discriminant analysis. The LS-SVM can effectively separate the stride patterns between the groups of healthy controls and ALS patients with an overall accuracy rate of 82.8% and an area of 0.869 under the receiver operating characteristic curve. PMID:21130016
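A minimal numpy sketch of the two feature steps named above, Parzen-window density estimation and a symmetrized Kullback-Leibler divergence (a simple stand-in for the paper's MKLD, whose exact definition is not given in the abstract; the data here are synthetic, not gait recordings):

```python
import numpy as np

def parzen_pdf(samples, grid, h):
    """Nonparametric Parzen-window (Gaussian kernel) density estimate."""
    d = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def symmetric_kl(p, q, dx):
    """Symmetrized KL divergence between densities on a common grid."""
    p, q = p + 1e-12, q + 1e-12
    return dx * np.sum((p - q) * np.log(p / q))

rng = np.random.default_rng(0)
grid = np.linspace(-5, 5, 400)
dx = grid[1] - grid[0]
# two synthetic "stride interval" populations with shifted means
p = parzen_pdf(rng.normal(0.0, 1.0, 500), grid, h=0.3)
q = parzen_pdf(rng.normal(1.0, 1.0, 500), grid, h=0.3)
d_same = symmetric_kl(p, p, dx)  # identical densities give 0
d_diff = symmetric_kl(p, q, dx)  # shifted densities give a positive value
```

In the paper these PDF-derived features then feed the LS-SVM classifier, a stage this sketch omits.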
NASA Technical Reports Server (NTRS)
Schieldge, John
2000-01-01
Wavelet and fractal analyses have been used successfully to analyze one-dimensional data sets such as time series of financial, physical, and biological parameters. These techniques have been applied to two-dimensional problems in some instances, including the analysis of remote sensing imagery. Even so, these techniques have not been widely used by the remote sensing community, and their overall capabilities as analytical tools for use on satellite and aircraft data sets are not well known. Wavelet and fractal analyses have the potential to provide fresh insight into the characterization of surface properties such as temperature and emissivity distributions, and surface processes such as the heat and water vapor exchange between the surface and the lower atmosphere. In particular, the variation of sensible heat flux density as a function of the change in scale of surface properties is difficult to estimate, but in general wavelets and fractals have proved useful in determining the way a parameter varies with changes in scale. We present the results of a limited study on the relationship between spatial variations in surface temperature distribution and sensible heat flux distribution as determined by separate wavelet and fractal analyses. We analyzed aircraft imagery obtained in the thermal infrared (IR) bands from the multispectral TIMS and hyperspectral MASTER airborne sensors. The thermal IR data allow us to estimate the surface kinetic temperature distribution for a number of sites in the Midwestern and Southwestern United States (viz., San Pedro River Basin, Arizona; El Reno, Oklahoma; Jornada, New Mexico). The ground spatial resolution of the aircraft data varied from 5 to 15 meters. All sites were instrumented with meteorological and hydrological equipment, including surface layer flux measuring stations such as Bowen ratio systems and sonic anemometers.
The ground and aircraft data sets provided the inputs for the wavelet and fractal analyses, and the validation of the results.
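The scale-dependence analysis described above can be illustrated with a toy example (numpy only; a plain Haar decomposition, not the TIMS/MASTER processing): the variance of the wavelet detail coefficients at each level shows how a field such as surface temperature varies with spatial scale.

```python
import numpy as np

def haar_scale_variances(img, levels):
    """Variance of 2-D Haar detail coefficients at successive scales,
    a simple diagnostic of how a field varies with spatial scale."""
    a = img.astype(float)
    variances = []
    for _ in range(levels):
        # one 2-D Haar step over 2x2 blocks
        b = a[::2, ::2]; c = a[1::2, ::2]; d = a[::2, 1::2]; e = a[1::2, 1::2]
        detail = (b - c - d + e) / 2.0      # diagonal detail coefficients
        variances.append(float(detail.var()))
        a = (b + c + d + e) / 4.0           # approximation for the next level
    return variances

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))             # stand-in for a temperature image
v = haar_scale_variances(img, 3)            # one variance per dyadic scale
```

For a fractal (scaling) field, these variances follow a power law in scale, which is the link between the wavelet and fractal viewpoints mentioned above.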
2014-01-01
Background Hepatocellular carcinoma is a primary tumor of the liver, and its treatment modalities differ according to tumor stage. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measurement of the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor through the contrasted area of the lesions. Methods 63 computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. After that, we compared the algorithm's findings of the maximum diameter of the target lesions against radiologist measures. Results Computed results for the maximum diameter are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small tumors. Conclusions Differences between algorithm and radiologist measures were small for small tumors, with a trend to a further decrease for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented by non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria. PMID:25064234
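The paper's algorithm is not reproduced here; as a generic sketch of the final measurement step, the mRECIST-style maximum diameter of a segmented lesion mask can be taken as the largest pairwise distance between foreground pixels:

```python
import numpy as np

def max_diameter(mask, spacing=1.0):
    """Maximum in-plane diameter of a binary lesion mask, in the units of
    `spacing`: the largest pairwise distance between foreground pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([ys, xs]).astype(float) * spacing
    # brute-force pairwise distances (fine for small masks;
    # use convex-hull points for large ones)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    return float(d.max())

mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 5:15] = True                 # 5 x 10 pixel synthetic lesion
dia = max_diameter(mask, spacing=0.1)   # assumed 0.1 cm pixels
```

In practice the mask would come from segmenting the contrast-enhanced viable region, which is the subjective step the paper aims to automate.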
Mueller, M.; Madejski, G.
2009-05-20
The Method of Light Curve Simulations is a tool that has been applied to X-ray monitoring observations of Active Galactic Nuclei (AGN) for the characterization of the Power Density Spectrum (PDS) of temporal variability and measurement of associated break frequencies (which appear to be an important diagnostic for the mass of the black hole in these systems as well as their accretion state). It relies on a model for the PDS that is fit to the observed data. The determination of confidence regions on the fitted model parameters is of particular importance, and we show how the Neyman construction based on distributions of estimates may be implemented in the context of light curve simulations. We believe that this procedure offers advantages over the method used in earlier reports on PDS model fits, not least with respect to the correspondence between the size of the confidence region and the precision with which the data constrain the values of the model parameters. We plan to apply the new procedure to existing RXTE and XMM observations of Seyfert I galaxies as well as RXTE observations of the Seyfert II galaxy NGC 4945.
Toushmalani, Reza; Rahmati, Azizalah
2014-01-01
A gravity inversion method based on the Nettleton-Parasnis technique is used to estimate near-surface density in an area without exposed outcrop or where outcrop occurrences do not adequately represent the subsurface rock densities. Its accuracy, however, strongly depends on how efficiently the regional trends and very local (terrain) effects are removed from the gravity anomalies processed. Nettleton's method was implemented in a standard inversion scheme and combined with the simultaneous determination of terrain corrections. This method may lead to realistic density estimates of the topographical masses. The author applied this technique in the Bandar Charak area (Hormozgan, Iran), which has varied geological/geophysical properties. The inversion results are comparable both to values obtained from density logs in the area and to other methods such as fractal methods. The calculated density is 2.4005 g/cm³. The slightly higher difference between the calculated densities and the densities of hand rock samples may be caused by the effect of sediment-filled valleys. PMID:25674438
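A minimal numpy sketch of the Nettleton idea underlying the method: pick the reduction density whose Bouguer-corrected gravity is least correlated with topography. The profile is synthetic, 0.04193 mGal per (g/cm³ · m) is the standard infinite-slab constant, and the paper's inversion-plus-terrain-correction machinery is not reproduced.

```python
import numpy as np

def nettleton_density(gravity_mgal, elev_m, densities):
    """Nettleton-style density estimate: the reduction density whose
    Bouguer anomaly (slab correction 0.04193 * rho * h mGal, rho in
    g/cm^3, h in m) correlates least with topography."""
    best_rho, best_r = None, np.inf
    for rho in densities:
        bouguer = gravity_mgal - 0.04193 * rho * elev_m
        r = abs(np.corrcoef(bouguer, elev_m)[0, 1])
        if r < best_r:
            best_rho, best_r = rho, r
    return best_rho

# synthetic profile built with a "true" density of 2.4 g/cm^3 plus noise
rng = np.random.default_rng(2)
h = 100 + 50 * np.sin(np.linspace(0, 4 * np.pi, 80))
g = 0.04193 * 2.4 * h + rng.normal(0, 0.05, h.size)
rho_hat = nettleton_density(g, h, np.arange(2.0, 3.01, 0.05))  # near 2.4
```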
NASA Astrophysics Data System (ADS)
Grziwa, S.; Korth, J.; Pätzold, M.
2014-04-01
The space telescopes CoRoT and Kepler have provided a huge number of high-resolution stellar light curves. The light curves are searched for transit signals which may be produced by planets passing in front of the stellar disc. Various flux variations from star spots, pulsation, flares, glitches, hot pixels, etc., however, dominate the stellar light curves and mask faint transit signals, in particular those of small exoplanets, which may lead to missed candidates or a high rate of false detections. Only fully automated filtering and detection algorithms make it possible to manage the huge number of stellar light curves to search for transits. This will become even more important for the future missions PLATO and TESS. The Rheinisches Institut für Umweltforschung (RIU-PF), as one of the CoRoT detection teams, has developed two model-independent wavelet-based filter techniques, VARLET and PHALET, to reduce the flux variability in light curves in order to improve the search for transits. The VARLET filter separates faint transit signals from stellar variations without using a-priori information about the target star. VARLET distinguishes variations by frequency, amplitude and shape, and separates the large-scale variations from the white noise. The transit feature, however, is not extracted and remains in the noise time series, which makes it much easier to search for transits with the search routine EXOTRANS (Grziwa, S. et al. 2012 [1]). The PHALET filter is used to separate periodic features with well-known periods independent of their shape. With PHALET it is possible to separate detected diluting binaries and other periodic effects (e.g. disturbances caused by the spacecraft motion in Earth orbit). The main purpose, however, is to separate already detected transits in order to search for transits of additional planets in the same stellar systems. RIU-PF searched all Kepler light curves for planetary transits by including VARLET and PHALET in the processing pipeline.
The results of that search are compared with the public Kepler candidate list. About 93% of the 2232 systems in the newest Kepler candidate list were confirmed. More than 20 new planetary systems and more than 15 additional candidates in already known multi-planet systems could nevertheless be added to the list and will be presented.
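VARLET itself is not public in this abstract; as a toy stand-in for the idea of removing large-scale stellar variability with a wavelet decomposition, a multilevel Haar transform can drop the coarse approximation and keep the fine structure in which transit dips live:

```python
import numpy as np

def haar_detrend(x, levels):
    """Remove large-scale variability from a light curve: multilevel Haar
    decomposition, zero the coarsest approximation, reconstruct.
    (Length must be divisible by 2**levels.)"""
    details = []
    a = x.astype(float)
    for _ in range(levels):
        s = (a[::2] + a[1::2]) / 2.0     # approximation
        d = (a[::2] - a[1::2]) / 2.0     # detail
        details.append(d)
        a = s
    a = np.zeros_like(a)                 # drop the smooth trend
    for d in reversed(details):
        up = np.empty(2 * a.size)
        up[::2] = a + d
        up[1::2] = a - d
        a = up
    return a

t = np.arange(1024)
trend = 50.0 * np.sin(2 * np.pi * t / 1024)      # slow stellar variation
transit = np.where((t % 256) < 4, -1.0, 0.0)     # faint periodic dips
clean = haar_detrend(trend + transit, levels=6)  # trend largely removed
```

The real filter is far more selective (it distinguishes variations by frequency, amplitude and shape); this sketch only shows the decompose/zero/reconstruct mechanic.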
Wavelet-based associative memory
NASA Astrophysics Data System (ADS)
Jones, Katharine J.
2004-04-01
Faces provide important characteristics of a person's identification. In security checks, face recognition remains the method in continuous use despite other approaches (i.e. fingerprints, voice recognition, pupil contraction, DNA scanners). With an associative memory, the output data is recalled directly using the input data. This can be achieved with a Nonlinear Holographic Associative Memory (NHAM). This approach can also distinguish between strongly correlated images and images that are partially or totally enclosed by others. Adaptive wavelet lifting has been used for Content-Based Image Retrieval. In this paper, adaptive wavelet lifting will be applied to face recognition to achieve an associative memory.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal. PMID:22420280
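The survey-efficiency question above has a simple back-of-the-envelope form: if a fraction of trees is infested and sampling an infested tree detects it with some probability, the chance that a survey of n trees flags at least one infestation is binomial. The rates below are illustrative, not the paper's estimates.

```python
def detection_probability(n_trees, infest_rate, per_tree_detect):
    """Chance that sampling n random trees detects at least one infested
    tree, given the infested fraction and per-tree detection probability."""
    miss = (1.0 - infest_rate * per_tree_detect) ** n_trees
    return 1.0 - miss

# low-density population: 1% of trees infested, 25% per-tree detection
p_low = detection_probability(100, 0.01, 0.25)
# a tool with three-fold higher per-tree detection probability
p_hi = detection_probability(100, 0.01, 0.75)
```

Even tripling the per-tree detection probability leaves the survey-level detection chance well below certainty at low densities, which is the qualitative point the paper makes.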
Wood, Spencer
… the density of northern red-backed voles (Myodes rutilus) and deer mice (Peromyscus maniculatus) in the boreal … changes in the densities of small mammals, such as northern red-backed voles (Myodes rutilus) and deer … (Krebs et al. 2014). In addition, both northern red-backed voles and deer mice can exhibit dramatic …
Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina
2012-08-15
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r= 0.82, p < 0.001) and processed (r= 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r= 0.85, p < 0.001) and processed (r= 0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
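A minimal numpy sketch of the fuzzy c-means step at the core of the pipeline (plain FCM on synthetic 1-D gray levels with quantile initialization; the paper's adaptive cluster-number selection and SVM stage are not reproduced):

```python
import numpy as np

def fuzzy_cmeans(x, k, m=2.0, iters=100):
    """Plain fuzzy c-means on 1-D intensities: alternate fuzzy membership
    and centroid updates with fuzzifier m."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic init
    u = None
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers), u

rng = np.random.default_rng(3)
# two synthetic gray-level populations (e.g. fatty vs dense tissue)
x = np.concatenate([rng.normal(50, 5, 300), rng.normal(150, 5, 300)])
centers, _ = fuzzy_cmeans(x, k=2)   # centers recover the two modes
```

In the paper, clusters like these are then labeled fibroglandular or not by the SVM and aggregated into the PD% estimate.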
maximum a posteriori estimation of continuous density hidden Markov models (CDHMM). The classical MLE reestimation algorithms … of a sufficient statistic of fixed dimension is due to the underlying hidden process, i.e. a mul…
An Estimate of Solar Wind Density and Velocity Profiles in a Coronal Hole and a Coronal Streamer
NASA Technical Reports Server (NTRS)
Patzold, M.; Tsurutani, B. T.; Bird, M. K.
1996-01-01
Using the total electron content data obtained by the Ulysses Solar Corona Experiment (SCE) during the first solar conjunction in summer 1991, two data sets were selected, one associated with a coronal hole and the other associated with coronal streamer crossings. In order to determine coronal streamer density profiles, the electron content of the tracking passes embedded in a coronal streamer were corrected for the contributions from coronal hole densities.
NASA Technical Reports Server (NTRS)
Tomei, B. A.; Smith, L. G.
1986-01-01
Sounding rockets equipped to monitor electron density and its fine structure were launched into the auroral and equatorial ionosphere in 1980 and 1983, respectively. The measurement electronics are based on the Langmuir probe and are described in detail. An approach to the spectral analysis of the density irregularities is addressed and a software algorithm implementing the approach is given. Preliminary results of the analysis are presented.
NASA Astrophysics Data System (ADS)
Tanaka, Akiko; Nakano, Tsukasa; Ikehara, Ken
2011-02-01
X-ray computerized tomography (CT) analysis was used to image a 50 cm long half-round core sample recovered from near Challenger Mound in the Porcupine Seabight, off western Ireland, during Integrated Ocean Drilling Program Expedition 307. This allowed three-dimensional examination of the complex shapes of pebbles and ice-rafted debris in sedimentary sequences. X-ray CT analysis was also used for the determination of physical properties; a comparison between bulk density from the mass-volume method and density estimated from the linear attenuation coefficients of the X-ray CT images provides a spatially detailed and precise map of density variation in the samples through the distribution of CT numbers.
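To first order, the CT-number-to-density mapping used in this kind of calibration is linear. A sketch with a two-point air/water calibration (the reference values here are illustrative defaults, not the paper's calibration):

```python
import numpy as np

def ct_to_density(ct, ct_refs=(-1000.0, 0.0), rho_refs=(0.0012, 1.0)):
    """Map CT numbers to bulk density (g/cm^3) with a linear calibration
    through two reference points, air (~ -1000) and water (0) by default."""
    a, b = np.polyfit(ct_refs, rho_refs, 1)  # exact line through 2 points
    return a * np.asarray(ct, float) + b

rho = ct_to_density([0.0, 500.0, 1000.0])  # water and two denser materials
```

A real calibration would fit more reference materials and account for beam hardening, which a two-point line ignores.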
NASA Astrophysics Data System (ADS)
Hosoi, Fumiki; Omasa, Kenji
Vertical plant area density profiles of wheat (Triticum aestivum L.) canopy at different growth stages (tillering, stem elongation, flowering, and ripening stages) were estimated using high-resolution portable scanning lidar based on the voxel-based canopy profiling method. The canopy was scanned three-dimensionally by laser beams emitted from several measuring points surrounding the canopy. At the ripening stage, the central azimuth angle was inclined about 23° to the row direction to avoid obstruction of the beam into the lower canopy by the upper part. Plant area density profiles were estimated with root mean square errors of 0.28-0.79 m² m⁻³ at each growth stage and of 0.45 m² m⁻³ across all growth stages. Plant area index was also estimated, with absolute errors of 4.7%-7.7% at each growth stage and of 6.1% across all growth stages. Based on lidar-derived plant area density, the area of each type of organ (stem, leaves, ears) per unit ground area was related to the actual dry weight of each organ type, and regression equations were obtained. The standard errors of the equations were 4.1 g m⁻² for ears and 26.6 g m⁻² for stems and leaves. Based on these equations, the estimated total dry weight was from 63.3 to 279.4 g m⁻² for ears and from 35.8 to 375.3 g m⁻² for stems and leaves across the growth stages. Based on the estimated dry weight at ripening and the ratio of carbon to dry weight in wheat plants, the carbon stocks were 76.3 g C m⁻² for grain, 225.0 g C m⁻² for aboveground residue, and 301.3 g C m⁻² for all aboveground organs.
NASA Technical Reports Server (NTRS)
Desch, M. D.; Kaiser, M. L.
1984-01-01
Determinations by spacecraft of the low-frequency radio spectra and radiation beam geometry of the magnetospheres of Earth, Jupiter, and Saturn now permit a reliable assessment of the overall efficiency of the solar wind in stimulating intense, nonthermal radio bursts from these magnetospheres. It is found that earlier estimates of how magnetospheric radio output scales with the solar wind energy input must be greatly revised, with the result that, while the efficiency is much lower than previously thought, it is remarkably uniform from planet to planet. A 'radiometric Bode's law' is formulated from which a planet's magnetic moment can be estimated from its radio emission output. This law is applied to estimate the low-frequency radio power likely to be measured for Uranus by Voyager 2. It is shown how measurements of Uranus's radio flux can be used to estimate the planetary magnetic moment and solar wind stand-off distance before the in situ measurements.
NASA Astrophysics Data System (ADS)
Zazoun, Réda Samy
2013-07-01
Fracture density estimation is an indisputable challenge in fractured reservoir characterization. Traditional techniques of fracture characterization from core data are costly, time consuming, and difficult to extrapolate to non-cored wells. The aim of this paper is to construct a model able to predict fracture density from conventional well logs calibrated to core data by using artificial neural networks (ANNs). This technique was tested in the Cambro-Ordovician clastic reservoir of the Mesdar oil field (Saharan platform, Algeria). For this purpose, 170 cores (2120.14 m) from 17 unoriented wells were studied in detail. Seven training algorithms and eight neural network architectures were tested. The best architecture is a four-layered [6-16-3-1] network model with a six-neuron input layer (gamma ray, sonic interval transit time, caliper, neutron porosity, bulk density logs, and core depth), two hidden layers of 16 and three neurons, respectively, and a one-neuron output layer (fracture density). The results, based on 8094 data points from 13 wells, show the excellent prediction ability of the conjugate gradient descent (CGD) training algorithm (R-squared = 0.812). The cross plot of measured and predicted values of fracture density shows a very high coefficient of determination of 0.848. Our studies have demonstrated good agreement between the neural network model predictions and core fracture measurements. The results are promising and can easily be extended to other similar neighboring naturally fractured reservoirs.
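The paper's exact [6-16-3-1] network and training data are not available here; as a hedged sketch of the log-to-fracture-density regression idea, a tiny one-hidden-layer MLP trained by batch gradient descent on synthetic standardized "log" inputs:

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.05, epochs=500, seed=0):
    """One-hidden-layer regression MLP (tanh hidden units, linear output)
    trained by batch gradient descent; returns the per-epoch MSE losses."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y[:, None]
        losses.append(float((err ** 2).mean()))
        # backpropagation through the linear output and tanh hidden layer
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / n; gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return losses

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))            # six standardized well-log inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]      # synthetic "fracture density" target
losses = train_mlp(X, y)                 # loss falls as the net fits
```

The paper's conjugate gradient training would converge faster than plain gradient descent; the sketch keeps the simpler optimizer for brevity.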
Power Spectral and Coherence Estimation of Non-stationary Fields
NASA Astrophysics Data System (ADS)
Simons, F. J.
The estimation of spectral density functions and coherence functions is frequently applied to analyze and characterize data as diverse as climate time series, seismograms, tomographic models of Earth structure, or gravity and topography data. In particular the calculation of coherence functions, which measure the joint phase behavior of two random variables, has numerous applications. The spectral coherence of two-dimensional maps of gravity anomalies and topography is indicative of the mechanical strength of the lithosphere. We focus on methods to retrieve spatial, azimuthal and wavelength-dependent variations of coherence with multi-window methods. The Thomson multitaper method uses Slepian windows to produce low-variance, minimum-bias spectrum and coherence estimates of stationary fields. This Slepian multitaper method can be applied with sliding windows for fields with nonstationary properties. Just as the periodogram is used to assess the frequency concentration of windows for stationary multi-window spectrum estimation, the Wigner-Ville distribution characterizes the time-frequency properties of windowing functions for the estimation of nonstationary properties. With Slepian windows, the mapping of the phase space is in terms of rectangular domains of time (or space) and frequency. However, a simultaneous optimization of concentration in time and frequency leads to a set of orthogonal Hermite windows for time-frequency (spectrogram) estimation, and to Morse wavelet transforms in the time-scale (scalogram) case. When Hermite windowing functions are used for multitaper spectral analysis, the resolution kernel of the estimate is circularly symmetric in the time-frequency plane. Time and frequency resolution are not at the expense of each other, and in the case of two-dimensional fields, the isotropy of the kernel allows the robust characterization of anisotropic properties.
We review the properties and utility of various windowing techniques for the spectral characterization of stationary and nonstationary fields, and give examples of one- and two-dimensional univariate stochastic processes (climate series, random data, seismograms, reactor noise) and multi-dimensional multivariate processes, with an emphasis on the anisotropic estimation of the coherence between gravity and topography data. We discuss the properties and advantages of the Hermite multi-spectrogram method with respect to wavelet-based methods, and apply the former to the coherence estimation between Bouguer anomalies and topography of Australia for an analysis of the spatially varying anisotropic mechanical strength of the continent.
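The Thomson multitaper scheme described above can be sketched in a few lines: average the periodograms of several orthogonal Slepian-tapered copies of the signal. This is a minimal illustration; the signal, time-bandwidth product NW, and taper count K below are illustrative choices, not values from the abstract.

```python
# Minimal multitaper PSD sketch with Slepian (DPSS) windows.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4.0, K=7, fs=1.0):
    """Average the periodograms of K orthogonal Slepian-tapered copies of x."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                      # shape (K, N)
    # One eigenspectrum per taper; averaging reduces variance roughly as 1/K.
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = eigenspectra.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, psd

# Example: a sinusoid in white noise.
rng = np.random.default_rng(0)
fs, N = 100.0, 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * rng.standard_normal(N)
freqs, psd = multitaper_psd(x, fs=fs)
print(freqs[np.argmax(psd)])                          # peak near 12.5 Hz
```

The variance reduction from averaging over K tapers is what gives the "low-variance" property the abstract mentions; the Slepian tapers concentrate energy inside the bandwidth 2W = 2·NW/N·fs, which controls the bias.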
Technology Transfer Automated Retrieval System (TEKTRAN)
Trees, even in the same orchard or nursery, can have considerably different structures and foliage densities. Conventional chemical applications often spray the entire field at a constant rate without considering field variations, resulting in excessive chemical waste and spray drift. To address thi...
Butti, Camilla; Corain, Livio; Cozzi, Bruno; Podestà, Michela; Pirone, Andrea; Affronte, Marco; Zotti, Alessandro
2007-01-01
The determination of age is an important step in defining the life history traits of individuals and populations. Age determination of odontocetes is mainly based on counting annual growth layer groups in the teeth. However, this useful method is always invasive, requiring the cutting of at least one tooth, and sometimes the results are difficult to interpret. Based on the concept that bone matrix is constantly deposited throughout life, we analysed the bone mineral density of the arm and forearm of a series of bottlenose dolphins (Tursiops truncatus, Montagu 1821) stranded along the Italian coast of the Adriatic Sea or maintained in confined waters. The bone mineral density values we obtained were evaluated as possible age predictors of the Mediterranean population of this species, considering age as determined by counting growth layer groups in sections of the teeth and the total body length of the animal as references. Comparisons between left and right flipper showed no difference. Our results show that bone mineral density values of the thoracic limb are indeed reliable age predictors in Tursiops truncatus. Further investigations in additional odontocete species are necessary to provide strong evidence of the reliability of bone mineral density as an indicator of growth and chronological wear and tear in toothed whales. PMID:17850286
NASA Astrophysics Data System (ADS)
Catalano, George D.
1998-06-01
The effects of noise modulation on the power spectral density functions of a sinusoidal wave are calculated in closed form. Frequency, phase, and amplitude modulation are considered. Noise processes are modeled using Butterworth filters of various integer orders. Increasing the order of the Butterworth filter increases the signal-to-noise ratio of the modulated sinusoidal wave.
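The setup in this and the following abstract can be sketched numerically: phase-modulate a sinusoidal carrier with noise shaped by Butterworth filters of increasing integer order and inspect the resulting power spectral density. All frequencies, amplitudes, and filter cutoffs below are illustrative, not taken from the paper.

```python
# Sketch: sinusoid phase-modulated by Butterworth-filtered noise, PSD via Welch.
import numpy as np
from scipy.signal import butter, lfilter, welch

fs, N = 1000.0, 2 ** 16
rng = np.random.default_rng(1)
t = np.arange(N) / fs
carrier_hz = 100.0

def modulated_psd(order):
    b, a = butter(order, 0.05)                 # low-pass shaping of white noise
    phase_noise = lfilter(b, a, rng.standard_normal(N))
    x = np.cos(2 * np.pi * carrier_hz * t + phase_noise)
    return welch(x, fs=fs, nperseg=4096)

peaks = {}
for order in (1, 3, 6):
    f, pxx = modulated_psd(order)
    peaks[order] = f[np.argmax(pxx)]
    print(order, peaks[order])                 # carrier peak stays near 100 Hz
```

Raising the filter order sharpens the noise band edges, which is the mechanism behind the signal-to-noise improvement the abstract reports; the sketch only demonstrates the modulation setup, not the closed-form result.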
NASA Astrophysics Data System (ADS)
Catalano, George D.
1998-03-01
The effects of noise modulation on the power spectral density functions of a sinusoidal wave are calculated in closed form. Frequency, phase, and amplitude modulation are considered. Noise processes are modeled using Butterworth filters of various integer orders. Both stationary and nonstationary noise processes are included with Daubechies wavelet filters used for the nonsteady case.
NASA Astrophysics Data System (ADS)
Janů, Zdeněk; Soukup, František
2014-06-01
We have measured the ac magnetic susceptibility of recent second-generation high-temperature superconductor wires (coated-conductor tapes), both as a function of the amplitude of an applied ac magnetic field at fixed temperatures and as a function of temperature at fixed amplitudes. We find that the data acquired by both methods are well described by the ac susceptibility calculated from the Clem-Sanchez model for the response of thin superconducting disks in the Bean critical state to an applied perpendicular ac magnetic field. We show how to link the empirical data, dependent on the field amplitude or temperature, with theoretical data dependent on the ratio between the critical current density and the field amplitude. The critical depinning current densities and their temperature dependence found by both methods are in good agreement. We discuss and compare the accuracy and time savings of both methods.
NASA Astrophysics Data System (ADS)
Xu, L.; He, N. P.; Yu, G. R.; Wen, D.; Gao, Y.; He, H. L.
2015-08-01
Accurate estimation of soil organic carbon (SOC) storage is important for evaluating carbon sequestration of terrestrial ecosystems at regional scale. How the selected pedotransfer functions (PTFs) of bulk density (BD) influence the estimates of SOC storage is still unclear at large scales, although BD is an important parameter in all equations. Here we used data from the second national soil survey in China (8210 soil profiles) to evaluate the influence of eight selected PTFs on the estimation of SOC storage. The results showed that different PTFs may result in a higher uncertainty of SOC storage estimation, and the coefficient of variation (CV, %) for the eight PTFs varied from 10.61% to 70.46% (mean = 12.75%). The observed CV values were higher in the 0-20 cm layer (12.48%) than in the 20-100 cm layer (10.05%). CV values were relatively stable (10-15%) when SOC content ranged from 0.13% to 3.45%. The findings indicate that PTFs should be used cautiously in soils with higher or lower SOC content. Estimates of SOC storage in the 0-100 cm soil layer varied from 67.19 to 95.97 Pg C across the eight PTFs in China, with an average of 87.36 ± 8.93 Pg C (CV = 10.23%). Our findings provide the insight that differences in PTFs are important sources of uncertainty in SOC estimates. The development of better PTFs, or the integration of various PTFs, is essential for accurately estimating SOC storage at regional scales.
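The propagation of PTF choice into SOC storage can be sketched with the standard layer formula, SOC storage (Mg C/ha) = SOC(%) × BD(g/cm³) × depth(cm). The three BD regressions below are hypothetical stand-ins, not the eight functions evaluated in the study.

```python
# Sketch: CV of SOC storage across alternative bulk-density PTFs (illustrative).
import numpy as np

def soc_storage_mg_ha(soc_pct, bd_g_cm3, depth_cm):
    # 1 % SOC x 1 g/cm^3 x 1 cm thickness == 1 Mg C per hectare
    return soc_pct * bd_g_cm3 * depth_cm

ptfs = {                                   # hypothetical BD(SOC%) regressions
    "ptf_a": lambda soc: 1.60 - 0.08 * soc,
    "ptf_b": lambda soc: 1.55 * np.exp(-0.05 * soc),
    "ptf_c": lambda soc: 1.50 - 0.06 * soc,
}

soc_pct, depth_cm = 1.8, 20.0              # a single 0-20 cm layer
storages = np.array([soc_storage_mg_ha(soc_pct, f(soc_pct), depth_cm)
                     for f in ptfs.values()])
cv_pct = 100.0 * storages.std(ddof=1) / storages.mean()
print(storages.round(1), round(cv_pct, 1))
```

Even with BD regressions that agree to within a few percent, the spread propagates one-for-one into the storage estimate, which is why the abstract treats PTF choice as a first-order uncertainty source.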
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1981-01-01
A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the advanced continuous simulation language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
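The example computation the abstract describes can be reproduced in miniature: integrate a Van der Pol oscillator and estimate its power density spectrum from the FFT of the time history. The parameter mu and the sampling choices are illustrative, not taken from the report.

```python
# Sketch: PDS of a simulated Van der Pol oscillator via the FFT (periodogram).
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0
def van_der_pol(t, y):
    x, v = y
    return [v, mu * (1 - x ** 2) * v - x]

fs, T = 20.0, 400.0
t_eval = np.arange(0, T, 1 / fs)
sol = solve_ivp(van_der_pol, (0, T), [2.0, 0.0], t_eval=t_eval, rtol=1e-8)
x = sol.y[0] - sol.y[0].mean()             # remove DC before the transform

X = np.fft.rfft(x)
psd = np.abs(X) ** 2 / (fs * len(x))       # one-sided periodogram
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
f_peak = freqs[np.argmax(psd)]
print(f_peak)                              # fundamental near 0.15 Hz for mu = 1
```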
Edwards, T. B.; Peeler, D. K.; Kot, W. K.; Gan, H.; Pegg, I. L.
2013-04-30
The Department of Energy – Savannah River (DOE-SR) has provided direction to Savannah River Remediation (SRR) to maintain fissile concentration in glass below 897 g/m^3. In support of that guidance, the Savannah River National Laboratory (SRNL) provided a technical basis and a supporting Microsoft® Excel® spreadsheet for the evaluation of fissile loading in Sludge Batch 5 (SB5), Sludge Batch 6 (SB6), Sludge Batch 7a (SB7a), and Sludge Batch 7b (SB7b) glass based on the iron (Fe) concentration in glass as determined by the measurements from the Slurry Mix Evaporator (SME) acceptability analysis. SRR has since requested that the necessary density information be provided to allow SRR to update the Excel® spreadsheet so that it may be used to maintain fissile concentration in glass below 897 g/m^3 during the processing of Sludge Batch 8 (SB8). One of the primary inputs into the fissile loading spreadsheet is an upper bound for the density of SB8-based glasses. These bounding density values are to be used to assess the fissile concentration in this glass system. It should be noted that no changes are needed to the underlying structure of the Excel-based spreadsheet to support fissile assessments for SB8. However, SRR should update the other key inputs to the spreadsheet that are based on fissile and Fe concentrations reported from the SB8 Waste Acceptance Product Specification (WAPS) sample.
Rittenhouse, Chadwick D.; Millspaugh, Joshua J.; Rittenhouse, Tracy A.G.
2015-01-01
Box turtles (Terrapene carolina) are widely distributed but vulnerable to population decline across their range. Using distance sampling, morphometric data, and an index of carapace damage, we surveyed three-toed box turtles (Terrapene carolina triunguis) at 2 sites in central Missouri, and compared differences in detection probabilities when transects were walked by one or two observers. Our estimated turtle density within forested cover was lower at the Thomas S. Baskett Wildlife Research and Education Center, a site dominated by eastern hardwood forest (d = 1.85 turtles/ha, 95% CI [1.13, 3.03]), than at the Prairie Fork Conservation Area, a site containing a mix of open field and hardwood forest (d = 4.14 turtles/ha, 95% CI [1.99, 8.62]). Turtles at Baskett were significantly older and larger than turtles at Prairie Fork. Damage to the carapace did not differ significantly between the 2 populations despite the more prevalent habitat management, including mowing and prescribed fire, at Prairie Fork. We achieved improved estimates of density using two rather than one observer at Prairie Fork, but negligible differences in density estimates between the two methods at Baskett. Error associated with probability of detection decreased at both sites with the addition of a second observer. We provide demographic data suggesting that three-toed box turtles use a range of habitat conditions. This case study suggests that habitat management practices and their impacts on habitat composition may be a cause of the differences observed in our focal populations of turtles. PMID:26417539
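A minimal distance-sampling density estimate can be sketched with a half-normal detection function, the standard line-transect model (the abstract does not state which detection model was actually fitted). The distances, transect length, and sample size below are simulated, not the study's data.

```python
# Sketch: line-transect density estimation with a half-normal detection function.
import numpy as np

rng = np.random.default_rng(2)

# Simulate perpendicular detection distances (m); truth is half-normal, sigma = 4 m.
sigma_true = 4.0
d = np.abs(rng.normal(0.0, sigma_true, size=120))

# MLE of sigma for a half-normal detection function on line transects.
sigma_hat = np.sqrt(np.mean(d ** 2))
esw = sigma_hat * np.sqrt(np.pi / 2.0)       # effective strip half-width (m)

L = 5000.0                                   # total transect length (m)
n = len(d)
density_per_ha = n / (2.0 * L * esw) * 1e4   # animals per hectare
print(round(density_per_ha, 2))
```

The effective strip half-width plays the role the abstract's detection probability does: a second observer raises detectability, widening the effective strip and shrinking the error on the density estimate.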
NASA Astrophysics Data System (ADS)
Shpynev, B. G.; Khabituev, D. S.
2014-11-01
Observations obtained using the Irkutsk Incoherent Scatter Radar (ISR) and GPS Total Electron Content (TEC) were used for estimation of the O+/H+ transition level and the electron density distribution in the upper topside ionosphere and in the plasmasphere. We develop a model based on a modified Chapman function in which the O+/H+ transition level is one of the parameters. On the basis of this model we consider some examples of O+/H+ transition-height dynamics and estimate the uncertainty of the method. We show that the transition-height dynamics is very sensitive to the parameters of the neutral wind and has a specific variation at the Irkutsk ISR site. The plasmasphere can contribute more than 50% of GPS TEC, and the plasmaspheric input can produce a significant influence on GPS TEC variations.
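A generic (alpha-)Chapman electron-density profile, of the family used as the fitting function above, can be sketched as follows. The exact "modified" form with the O+/H+ transition height as a parameter is the authors'; the parameterization and values below are a made-up illustration.

```python
# Sketch: a generic Chapman electron-density profile (illustrative parameters).
import numpy as np

def chapman(h_km, nm, hm_km, scale_km):
    """Chapman layer: peak density nm at height hm_km, scale height scale_km."""
    z = (h_km - hm_km) / scale_km
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(200.0, 1000.0, 81)
ne = chapman(h, nm=1.0e12, hm_km=300.0, scale_km=60.0)   # electrons / m^3
h_peak = h[np.argmax(ne)]
print(h_peak)                               # peak recovered at hmF2 = 300 km
```

In the paper's approach such a profile is fitted to ISR data, and the topside tail (here governed by the scale height) is what the O+/H+ transition parameter modifies.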
NASA Astrophysics Data System (ADS)
Agbemava, Sylvester; Afanasjev, A. V.; Ray, D.; Ring, P.
2014-03-01
Covariant density functional theory is a modern theoretical tool for the description of nuclear structure phenomena. In this theory, the nucleus is described as a system of nucleons which interact by the exchange of various mesons. The goal of the current investigation is a first-ever global assessment of the accuracy of the description of physical observables related to the ground-state properties of even-even nuclei, and the establishment of theoretical uncertainties in their description, using a set of four modern covariant energy density functionals (CEDFs): NL3*, DD-ME2, DD-MEδ, and DD-PC1. Calculated binding energies, deformations, radii, and two-particle separation energies are compared in a systematic way with available experimental data. The comparison of theoretical results obtained with these CEDFs allows one to establish theoretical uncertainties in the description of physical observables in known regions of the nuclear chart and to extrapolate them towards the neutron drip line. This work has been supported by the U.S. Department of Energy under grant DE-FG02-07ER41459 and by the DFG cluster of excellence ``Origin and Structure of the Universe'' (www.universe-cluster.de).
NASA Astrophysics Data System (ADS)
Ramanathan, R.; Blakely, J. M.
1987-12-01
The amount of carbon adsorbed on the surface of Ni in contact with carbonaceous gas mixtures such as CH4/H2 and CO/CO2 is estimated from equilibrium segregation data. The results are displayed on "gas composition versus temperature" plots for the above two gas mixtures. These plots provide basic thermodynamic information relevant to reactions such as steam reforming of hydrocarbons on supported Ni catalysts. For example, the plot for CO/CO2 gas mixtures represents the Boudouard equilibrium on a single-crystal Ni catalyst, whilst the plot for CH4/H2 gas mixtures provides information relevant to the equilibrium hydrogenation of adsorbed C to CH4.
NASA Technical Reports Server (NTRS)
Melick, H. C., Jr.; Ybarra, A. H.; Bencze, D. P.
1975-01-01
An inexpensive method is developed to determine the extreme values of instantaneous inlet distortion. This method also provides insight into the basic mechanics of unsteady inlet flow and the associated engine reaction. The analysis is based on fundamental fluid dynamics and statistical methods to provide an understanding of the turbulent inlet flow and quantitatively relate the rms level and power spectral density (PSD) function of the measured time variant total pressure fluctuations to the strength and size of the low pressure regions. The most probable extreme value of the instantaneous distortion is then synthesized from this information in conjunction with the steady state distortion. Results of the analysis show the extreme values to be dependent upon the steady state distortion, the measured turbulence rms level and PSD function, the time on point, and the engine response characteristics. Analytical projections of instantaneous distortion are presented and compared with data obtained by a conventional, highly time correlated, 40 probe instantaneous pressure measurement system.
NASA Astrophysics Data System (ADS)
Liu, Yuwei; Dong, Ning; Fehler, Mike; Fang, Xinding; Liu, Xiwu
2015-06-01
Fractures significantly affect reservoir flow properties, so fracture characteristics such as preferred orientation, crack density, fracture compliance, and the material filling the fractures are of great importance for reservoir development. When fractures are vertical, aligned, and small relative to the seismic wavelength, the medium can be considered an equivalent horizontal transverse isotropic (HTI) medium. However, geophysical data acquired over naturally fractured reservoirs often reveal the presence of multiple fracture sets. We investigate a case in which there are two vertical sets of fractures with differing length scales: one set much smaller than the seismic wavelength and the other comparable to it. We use synthetic data to investigate the ability to infer the properties of the small-scale fractures in the presence of the large-scale fracture set. We invert for the Thomsen-type anisotropic coefficients of the small-scale fracture set by using the difference of the P-wave amplitudes at two azimuths, which makes the inversion convex. Then we investigate the influence of the presence of the large-scale fractures on our ability to infer the properties of the small-scale fracture set. Surprisingly, we find that we can reliably infer the fracture density of the small-scale fractures even in the presence of large-scale fractures having significant compliance values. Although the inversion results for the Thomsen-type anisotropic coefficients of the small-scale fractures for one model are not good enough to determine whether the fractures are gas-filled or fluid-filled, we find a large change in the Thomsen-type anisotropic coefficient ε(V) between the models in which the small-scale fractures are filled with gas and with fluid.
Wavelet Analysis for Wind Fields Estimation
Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.
2010-01-01
Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including à trous with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s^-1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699
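The directional spectral-processing step can be sketched on a synthetic image: recover the dominant orientation of a striped (wind-row-like) pattern from 2-D FFT energy. The paper couples this with an undecimated wavelet decomposition; only the spectral orientation estimate is illustrated here, with made-up image parameters.

```python
# Sketch: dominant-orientation retrieval from 2-D FFT energy of a striped image.
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]
angle_true = np.deg2rad(30.0)               # direction of the stripe normal
k = 2 * np.pi * 8 / n                       # 8 cycles across the image
img = np.sin(k * (x * np.cos(angle_true) + y * np.sin(angle_true)))

F = np.fft.fftshift(np.abs(np.fft.fft2(img)) ** 2)
F[n // 2, n // 2] = 0.0                     # drop the DC bin
ky, kx = np.unravel_index(np.argmax(F), F.shape)
angle_est = np.arctan2(ky - n // 2, kx - n // 2) % np.pi
print(round(np.rad2deg(angle_est), 1))      # close to 30 degrees
```

The modulo-pi step reflects the 180° ambiguity of spectral direction estimates; in the paper this ambiguity is resolved with auxiliary information before wind speed is computed from the C-band model.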
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess the consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations that varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single- and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution, and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.
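The kind of simulation experiment described above can be sketched in miniature: place a population on a site, sample quadrats at random, and compare the CV of the density estimate for random versus clustered spatial patterns. The cluster model (a simple parent-offspring process) and all counts are illustrative, not the study's designs.

```python
# Sketch: effect of spatial clustering on the CV of quadrat-based density estimates.
import numpy as np

rng = np.random.default_rng(3)
side, n_animals, quad, n_quads, n_reps = 100.0, 2000, 1.0, 50, 200

def simulate(clustered):
    if clustered:
        parents = rng.uniform(0, side, size=(40, 2))
        idx = rng.integers(0, 40, size=n_animals)
        pts = parents[idx] + rng.normal(0, 2.0, size=(n_animals, 2))
        pts %= side                          # wrap to keep points on the site
    else:
        pts = rng.uniform(0, side, size=(n_animals, 2))
    return pts

def density_cv(clustered):
    ests = []
    for _ in range(n_reps):
        pts = simulate(clustered)
        corners = rng.uniform(0, side - quad, size=(n_quads, 2))
        counts = [np.sum((pts[:, 0] >= a) & (pts[:, 0] < a + quad) &
                         (pts[:, 1] >= b) & (pts[:, 1] < b + quad))
                  for a, b in corners]
        ests.append(np.mean(counts) / quad ** 2)
    ests = np.array(ests)
    return ests.std() / ests.mean()

cv_random, cv_clustered = density_cv(False), density_cv(True)
print(round(cv_random, 2), round(cv_clustered, 2))   # clustering inflates the CV
```

This reproduces the qualitative point of the abstract: for a fixed number of quadrats, precision is driven largely by density and spatial pattern, which is what motivates adaptive designs for clustered populations.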
NASA Astrophysics Data System (ADS)
Spaleta, J.; Bristow, W. A.
2012-12-01
SuperDARN radars estimate plasma drift velocities from the Doppler shift observed on signals scattered from field-aligned density irregularities. The radars operate in the range of 8 MHz to 20 MHz and have ray paths covering a wide range of elevation angles, in order to maximize the range over which the scattering conditions are satisfied. Upward-propagating electromagnetic signals in this frequency range can be significantly refracted by the ionospheric plasma. The propagation paths of the refracted signals are bent earthward and at some point along this refracted path propagate perpendicular to the local magnetic field and scatter on the field-aligned density irregularities. The refraction results from gradients of the index of refraction in the ionospheric plasma. The index inside the ionosphere is lower than its free-space value, which depresses the measured line-of-sight velocity relative to the actual velocity of the plasma. One way to account for the depression of the measured velocity is to estimate the index of refraction in the scattering region by making multiple velocity measurements at different operating frequencies. Together with the appropriate plasma dispersion relations, multiple frequency measurements can be used to construct relations for the index of refraction, plasma density, and the line-of-sight velocity correction factor as functions of frequency-weighted measured velocity differences. Recent studies have used frequency-switching events spanning many days during traditional SuperDARN radar operation to build a statistical estimate of the index of refraction, which is insensitive to the real-time spatial dynamics of the ionosphere. This statistical approach has motivated the development of a new mode of radar operation that provides simultaneous dual-frequency measurements in order to resolve the temporal and spatial dynamics of the index of refraction calculations.
Newly-developed multi-channel capabilities available in the SuperDARN radar control software now allow simultaneous dual frequency measurements at the McMurdo radar. This simultaneous dual frequency capability makes it possible, for the first time, to calculate real-time spatially resolved index of refraction and velocity correction factors across the radar field of view. Selected findings from the first several months of simultaneous dual frequency operation at the McMurdo SuperDARN radar will be presented.
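The two-frequency idea can be sketched as follows: if the measured line-of-sight velocity is the true drift scaled by the local index of refraction, n(f) = sqrt(1 - fp^2/f^2), then two simultaneous measurements at different radar frequencies determine both the plasma frequency fp and the corrected velocity. This is a simplified cold-plasma illustration with made-up numbers, not SuperDARN's actual processing chain.

```python
# Sketch: recovering plasma frequency and corrected drift from a dual-frequency pair.
import numpy as np

def correct_velocity(v1, f1, v2, f2):
    """Solve v_i = sqrt(1 - fp^2 / f_i^2) * v for fp and v."""
    r = (v1 / v2) ** 2
    fp2 = (1.0 - r) / (1.0 / f1 ** 2 - r / f2 ** 2)
    n1 = np.sqrt(1.0 - fp2 / f1 ** 2)
    return np.sqrt(fp2), v1 / n1

# Forward-model a measurement pair, then invert it.
fp, v_true = 5.0e6, 500.0                   # plasma frequency (Hz), drift (m/s)
f1, f2 = 10.0e6, 14.0e6
v1 = np.sqrt(1 - (fp / f1) ** 2) * v_true
v2 = np.sqrt(1 - (fp / f2) ** 2) * v_true
fp_est, v_est = correct_velocity(v1, f1, v2, f2)
print(round(fp_est), round(v_est, 1))       # recovers 5.0e6 Hz and 500.0 m/s
```

Because n < 1 inside the ionosphere, both measured velocities are depressed below the true drift; the lower-frequency channel is depressed more, and that difference is exactly what makes the pair invertible.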
NASA Astrophysics Data System (ADS)
Shpynev, Boris; Khabituev, Denis
2012-07-01
Experimental data obtained with the Irkutsk Incoherent Scatter Radar (ISR) and GPS Total Electron Content (TEC) were used for estimation of the O+/H+ transition level and the electron density distribution in the upper topside ionosphere and in the plasmasphere. As the model description we use a modified Chapman function in which the O+/H+ transition level is one of the parameters. On the basis of this model we considered the typical dynamics of the O+/H+ transition level in different seasons and under different geophysical conditions. This level is very sensitive to the parameters of the neutral wind and to conditions in the geomagnetic field tube. The plasmasphere can contribute as much as 50% of GPS TEC, and the plasmaspheric input can produce the major influence on GPS TEC variations.
NASA Technical Reports Server (NTRS)
Tellers, T. E.
1980-01-01
An existing one-dimensional model of the annual water balance is reviewed. Slight improvements are made in the method of calculating the bare soil component of evaporation, and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given, water limited, climate-soil system, is verified through comparisons with observed data and is employed in the annual water balance of watersheds in Clinton, Ma., and Santa Paula, Ca., to estimate effective areal average soil properties. Comparison of CDF's of annual basin yield derived using these soil properties with observed CDF's provides excellent verification of the soil-selection procedure. This method of parameterization of the land surface should be useful with present global circulation models, enabling them to account for both the non-linearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.
D. W. Sciama
1997-03-11
We here update the derivation of precise values for the Hubble constant H_0, the age t_0, and the density parameter Omega h^2 of the universe in the decaying neutrino theory for the ionisation of the interstellar medium (Sciama 1990a, 1993). Using recent measurements of the temperature of the cosmic microwave background, of the abundances of D, He^4, and Li^7, and of the intergalactic hydrogen-ionising photon flux at zero redshift, we obtain for the density parameter of the universe Omega h^2 = 0.300 ± 0.003. Observed limits on H_0 and t_0 then imply that, for a zero cosmological constant, H_0 = 52.5 ± 2.5 km s^-1 Mpc^-1, t_0 = 12.7 ± 0.7 Gyr, and Omega = 1.1 ± 0.1. If Omega = 1 exactly, then H_0 = 54.8 ± 0.3 km s^-1 Mpc^-1 and t_0 = 11.96 ± 0.06 Gyr. These precise predictions of the decaying neutrino theory are compatible with current observational estimates of these quantities.
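The Omega = 1 case quoted above admits a quick numeric check: with Omega h^2 = 0.300 and Omega = 1, h = sqrt(0.300), and for a flat matter-dominated (Einstein-de Sitter) universe t_0 = (2/3)/H_0. The physical constants below are standard values, not taken from the paper, so the age comes out at ~11.9 Gyr rather than exactly 11.96.

```python
# Sketch: consistency check of H_0 and t_0 for the Omega = 1 case.
import numpy as np

omega_h2 = 0.300
h = np.sqrt(omega_h2)                       # Omega = 1  =>  h^2 = Omega h^2
H0 = 100.0 * h                              # km s^-1 Mpc^-1

km_per_mpc = 3.0857e19
sec_per_gyr = 3.1557e16
hubble_time_gyr = km_per_mpc / H0 / sec_per_gyr
t0 = (2.0 / 3.0) * hubble_time_gyr          # Einstein-de Sitter age

print(round(H0, 1), round(t0, 2))           # ~54.8 km/s/Mpc and ~11.9 Gyr
```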
NASA Astrophysics Data System (ADS)
Karpenko, Iu. A.; Huovinen, P.; Petersen, H.; Bleicher, M.
2015-06-01
Hybrid approaches based on relativistic hydrodynamics and transport theory have been successfully applied for many years for the dynamical description of heavy-ion collisions at ultrarelativistic energies. In this work a new viscous hybrid model employing the hadron transport approach UrQMD for the early and late nonequilibrium stages of the reaction, and 3+1 dimensional viscous hydrodynamics for the hot and dense quark-gluon plasma stage, is introduced. This approach includes the equation of motion for finite baryon number and employs an equation of state with finite net-baryon density to allow for calculations in a large range of beam energies. The parameter space of the model is explored and constrained by comparison with the experimental data for bulk observables from the Super Proton Synchrotron and the phase I beam energy scan at the Relativistic Heavy Ion Collider. The favored parameter values depend on energy but allow extraction of the effective value of the shear viscosity to entropy density ratio η/s in the fluid phase for the whole energy region under investigation. The estimated value of η/s increases with decreasing collision energy, which may indicate that η/s of the quark-gluon plasma depends on the baryochemical potential μB.
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but may also lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction-versus-time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process.
Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled with alternative mechanistic models rather than with a limited number of particles.
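The core idea can be sketched in one dimension: reconstruct a concentration field from a small number of particles with a Gaussian kernel density estimate instead of fixed-support binning. The particle count and the bandwidth rule (Silverman's rule of thumb) are illustrative, not the scheme calibrated in the paper.

```python
# Sketch: Gaussian KDE reconstruction of a concentration field from few particles.
import numpy as np

rng = np.random.default_rng(4)
particles = rng.normal(0.0, 1.0, size=200)  # few particles; true density is N(0,1)

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
bw = 1.06 * particles.std() * len(particles) ** (-1 / 5)

def kde(x_eval, pts, h):
    u = (x_eval[:, None] - pts[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(pts) * h * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 161)
c_kde = kde(x, particles, bw)

true = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
print(round(np.abs(c_kde - true).mean(), 3))   # smooth, low-error reconstruction
```

Compared with histogram binning at the same particle count, the kernel support spreads each particle's mass over its neighbourhood, which is exactly the mechanism that restores well-mixed behaviour in the reaction-probability calculation.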
Bastardie, Francois
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and that the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural-scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlation was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year. PMID:24911631
Irizarry, Rafael A.
In the examples both direct and shrunken estimates are computed. The approach is implemented for time series data sets taken from musicology, meteorology, and sleep research, respectively.
NASA Astrophysics Data System (ADS)
Hosoi, Fumiki; Omasa, Kenji
2012-11-01
We used a high-resolution portable scanning lidar together with a lightweight mirror and a voxel-based canopy profiling method to estimate the vertical plant area density (PAD) profile of a rice (Oryza sativa L. cv. Koshihikari) canopy at different growth stages. To improve the laser's penetration of the dense canopy, we used a mirror to change the direction of the laser beam from horizontal to vertical (0°) and off-vertical (30°). The estimates of PAD and plant area index (PAI) were more accurate at 30° than at 0°. The root-mean-square errors of PAD at each growth stage ranged from 1.04 to 3.33 m^2 m^-3 at 0° and from 0.42 to 2.36 m^2 m^-3 at 30°, and those across all growth stages averaged 1.79 m^2 m^-3 at 0° and 1.52 m^2 m^-3 at 30°. The absolute percent errors of PAI at each growth stage ranged from 1.8% to 66.1% at 0° and from 4.3% to 23.2% at 30°, and those across all growth stages averaged 30.4% at 0° and 14.8% at 30°. The degree of laser beam coverage of the canopy (expressed as a coverage index) explained these errors. From the estimates of PAD at 30°, regressions between the areas of stems, leaves, and ears per unit ground area and actual dry weights gave standard errors of 7.9 g m^-2 for ears and 12.2 g m^-2 for stems and leaves.
Rasmussen, Teresa J.; Ziegler, Andrew C.; Rasmussen, Patrick P.
2005-01-01
The lower Kansas River is an important source of drinking water for hundreds of thousands of people in northeast Kansas. Constituents of concern identified by the Kansas Department of Health and Environment (KDHE) for streams in the lower Kansas River Basin include sulfate, chloride, nutrients, atrazine, bacteria, and sediment. Real-time continuous water-quality monitors were operated at three locations along the lower Kansas River from July 1999 through September 2004 to provide in-stream measurements of specific conductance, pH, water temperature, turbidity, and dissolved oxygen and to estimate concentrations for constituents of concern. Estimates of concentration and densities were combined with streamflow to calculate constituent loads and yields from January 2000 through December 2003. The Wamego monitoring site is located 44 river miles upstream from the Topeka monitoring site, which is 65 river miles upstream from the DeSoto monitoring site, which is 18 river miles upstream from where the Kansas River flows into the Missouri River. Land use in the Kansas River Basin is dominated by grassland and cropland, and streamflow is affected substantially by reservoirs. Water quality at the three monitoring sites varied with hydrologic conditions, season, and proximity to constituent sources. Nutrient and sediment concentrations and bacteria densities were substantially larger during periods of increased streamflow, indicating important contributions from nonpoint sources in the drainage basin. During the study period, pH remained well above the KDHE lower criterion of 6.5 standard units at all sites in all years, but exceeded the upper criterion of 8.5 standard units annually between 2 percent of the time (Wamego in 2001) and 65 percent of the time (DeSoto in 2003). The dissolved oxygen concentration was less than the minimum aquatic-life-support criterion of 5.0 milligrams per liter less than 1 percent of the time at all sites. 
Dissolved solids, a measure of the dissolved material in water, exceeded 500 milligrams per liter about one-half of the time at the three Kansas River sites. Larger dissolved-solids concentrations upstream likely were a result of water inflow from the highly mineralized Smoky Hill River that is diluted by tributary flow as it moves downstream. Concentrations of total nitrogen and total phosphorus at the three monitoring sites exceeded the ecoregion water-quality criteria suggested by the U.S. Environmental Protection Agency during the entire study period. Median nitrogen and phosphorus concentrations were similar at all three sites, and nutrient load increased moving from the upstream to downstream sites. Total nitrogen and total phosphorus yields were nearly the same from site to site indicating that nutrient sources were evenly distributed throughout the lower Kansas River Basin. About 11 percent of the total nitrogen load and 12 percent of the total phosphorus load at DeSoto during 2000-03 originated from wastewater-treatment facilities. Escherichia coli bacteria densities were largest at the middle site, Topeka. On average, 83 percent of the annual bacteria load at DeSoto during 2000-03 occurred during 10 percent of the time, primarily in conjunction with runoff. The average annual sediment loads at the middle and downstream monitoring sites (Topeka and DeSoto) were nearly double those at the upstream site (Wamego). The average annual sediment yield was largest at Topeka. On average, 64 percent of the annual suspended-sediment load at DeSoto during 2000-03 occurred during 10 percent of the time. Trapping of sediment by reservoirs located on contributing tributaries decreases transport of sediment and sediment-related constituents. The average annual suspended-sediment load in the Kansas River at DeSoto during 2000-03 was estimated at 1.66 million tons. 
An estimated 13 percent of this load consisted of sand-size particles, so approximately 216,000 tons of sand were transported
NASA Astrophysics Data System (ADS)
Suzuki, K.; Takayama, T.; Fujii, T.; Yamamoto, K.
2014-12-01
Many geologists have discussed slope instability caused by gas-hydrate dissociation, which can generate mobile fluid in the pore space of sediments. However, the physical property changes caused by gas-hydrate dissociation are not simple. Moreover, gas production from a gas-hydrate reservoir by the depressurization method is a completely different phenomenon from dissociation processes in nature, because it does not generate excess pore pressure even though gas and water are present. Hence, in all cases, the physical properties of gas-hydrate-bearing sediments and of their cover sediments are essential for understanding these phenomena and for simulating what happens during gas-hydrate dissociation. The Daini-Atsumi knoll, the site of the first offshore gas-production test from gas hydrate, is partially covered by slumps. Fortunately, one of these slumps was penetrated by both a Logging-While-Drilling (LWD) hole and a pressure-coring hole. From the LWD data analyses and core analyses, we determined the density structure of the sediments from the seafloor to the Bottom Simulating Reflector (BSR). The results are as follows. (1) The semi-confined slump showed relatively high density, which can be explained by over-consolidation resulting from layer-parallel compression during slumping. (2) The bottom sequence of the slump has zones of relatively high density, which can be explained by shear-induced compaction along the slide plane. (3) Density below the slump tends to increase with depth, consistent with normal consolidation of the sediments below the slump deposit. (4) Several kinds of log data for estimating the physical properties of the gas-hydrate reservoir sediments were obtained; these will be useful for constructing a geological model from the seafloor to the BSR.
We can use these results to build geological models not only for slope instability during slumping, but also for slope stability during the depressurization period of gas production from gas hydrate. Acknowledgement: This study was supported by funding from the Research Consortium for Methane Hydrate Resources in Japan (MH21 Research Consortium) planned by the Ministry of Economy, Trade and Industry (METI).
Potanin, E. P.; Ustinov, A. L.
2013-06-15
The parameters of a calcium plasma source based on an electron cyclotron resonance (ECR) discharge were calculated. The analysis was performed as applied to an ion cyclotron resonance system designed for separation of calcium isotopes. The plasma electrons in the source were heated by gyrotron microwave radiation in the zone of the inhomogeneous magnetic field. It was assumed that, in such a combined trap, the energy of the extraordinary microwave propagating from the high-field side was initially transferred to a small group of resonance electrons. As a result, two electron components with different transverse temperatures (the hot resonance component and the cold nonresonance component) were created in the plasma. The longitudinal temperatures of both components were assumed to be equal. The entire discharge space was divided into a narrow ECR zone, where resonance electrons acquired transverse energy, and the region of the discharge itself, where the gas was ionized. The transverse energy of resonance electrons was calculated by solving the equations for electron motion in an inhomogeneous magnetic field. Using the law of energy conservation and the balance condition for the number of hot electrons entering the discharge zone and cooled due to ionization and elastic collisions, the density of hot electrons was estimated and the dependence of the longitudinal temperature T_e∥ of the main (cold) electron component on the energy fraction β lost for radiation was obtained.
Massaro, F.; Funk, S.; D'Abrusco, R.; Paggi, A.; Smith, Howard A.; Masetti, N.; Giroletti, M.; Tosti, G.
2013-11-01
Nearly one-third of the γ-ray sources detected by Fermi are still unidentified, despite significant recent progress in this area. However, all of the γ-ray extragalactic sources associated in the second Fermi-LAT catalog have a radio counterpart. Motivated by this observational evidence, we investigate all the radio sources of the major radio surveys that lie within the positional uncertainty region of the unidentified γ-ray sources (UGSs) at a 95% level of confidence. First, we search for their infrared counterparts in the all-sky survey performed by the Wide-field Infrared Survey Explorer (WISE) and then we analyze their IR colors in comparison with those of the known γ-ray blazars. We propose a new approach, on the basis of a two-dimensional kernel density estimation technique in the single [3.4] − [4.6] − [12] μm WISE color-color plot, replacing the constraint imposed in our previous investigations on the detection at 22 μm of each potential IR counterpart of the UGSs with associated radio emission. The main goal of this analysis is to find distant γ-ray blazar candidates that, being too faint at 22 μm, are not detected by WISE and thus are not selected by our purely IR-based methods. We find 55 UGSs that likely correspond to radio sources with blazar-like IR signatures. An additional 11 UGSs that have blazar-like IR colors have been found within the sample of sources found with deep recent Australia Telescope Compact Array observations.
NASA Astrophysics Data System (ADS)
Ngan, Henry Y. T.; Yung, Nelson H. C.; Yeh, Anthony G. O.
2015-02-01
This paper presents a comparative study of outlier detection (OD) for large-scale traffic data. Traffic data nowadays are massive in scale and are collected every second throughout any modern city. In this research, traffic flow dynamics were collected from one of the busiest four-armed junctions in Hong Kong over a 31-day sampling period (764,027 vehicles in total). The traffic flow dynamics are expressed in a high-dimensional spatial-temporal (ST) signal format (i.e., 80 cycles), with a high degree of similarity within each signal and across different signals in one direction. A total of 19 traffic directions are identified at this junction, and many ST signals were collected over the 31-day period (874 signals). To reduce the dimensionality, the ST signals first undergo principal component analysis (PCA) and are represented as (x, y)-coordinates. These PCA coordinates are then assumed to be Gaussian distributed. Under this assumption, the data points are evaluated by (a) a correlation study with three variant coefficients, (b) a one-class support vector machine (SVM), and (c) kernel density estimation (KDE). The correlation study could not give any explicit OD result, while the one-class SVM and KDE achieve average DSRs of 59.61% and 95.20%, respectively.
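The KDE stage of such a pipeline is straightforward to sketch. Here synthetic 2D points stand in for the PCA-reduced (x, y)-coordinates, and points falling in the lowest-density 2% are flagged as outlier candidates; the threshold quantile and all data are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic stand-in for PCA-reduced (x, y) traffic-signal coordinates:
# a dense Gaussian cluster plus a few far-away outliers.
inliers = rng.normal(0.0, 1.0, size=(200, 2))
outliers = np.array([[8.0, 8.0], [-9.0, 7.5], [10.0, -8.0]])
points = np.vstack([inliers, outliers])

# Kernel density estimation over the 2D coordinates.
kde = gaussian_kde(points.T)
density = kde(points.T)

# Flag the lowest-density points (here, the bottom 2%) as outlier candidates.
threshold = np.quantile(density, 0.02)
flagged = np.where(density <= threshold)[0]
```

The three planted outliers sit far from the cluster, so their estimated density is essentially zero and they land below the quantile threshold.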
Batchelder, Kendra A.; Tanenbaum, Aaron B.; Albert, Seth; Guimond, Lyne; Kestener, Pierre; Arneodo, Alain; Khalil, Andre
2014-01-01
The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e., the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. A total of 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue. PMID:25222610
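The fractal-versus-Euclidean distinction above rests on estimating a scaling exponent from an image of the lesion. As a rough illustration of that idea, here is a plain box-counting estimator on a binary mask, a much simpler tool than the 2D WTMM machinery the paper actually uses:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2D mask by box counting.

    Illustrative only: the paper uses the 2D WTMM method, not box counting.
    """
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Partition the image into s-by-s boxes and count boxes that
        # contain at least one "on" pixel.
        trimmed = mask[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # The slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is a 2D (Euclidean) object.
mask = np.ones((64, 64), dtype=bool)
dim = box_counting_dimension(mask)
```

A filled square yields a dimension of 2; a genuinely fractal cluster of pixels would give a non-integer value between 1 and 2.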
Oesch, P. A.; Illingworth, G. D.; Magee, D.; Van Dokkum, P. G.; Momcheva, I.; Ashby, M. L. N.; Fazio, G. G.; Huang, J.-S.; Willner, S. P.; Gonzalez, V.; Trenti, M.; Brammer, G. B.; Skelton, R. E.; Spitler, L. R.
2014-05-10
We present the discovery of four surprisingly bright (H_160 ≈ 26-27 mag AB) galaxy candidates at z ∼ 9-10 in the complete HST CANDELS WFC3/IR GOODS-N imaging data, doubling the number of z ∼ 10 galaxy candidates that are known, just ∼500 Myr after the big bang. Two similarly bright sources are also detected in a reanalysis of the GOODS-S data set. Three of the four galaxies in GOODS-N are significantly detected at 4.5σ-6.2σ in the very deep Spitzer/IRAC 4.5 μm data, as is one of the GOODS-S candidates. Furthermore, the brightest of our candidates (at z = 10.2 ± 0.4) is robustly detected also at 3.6 μm (6.9σ), revealing a flat UV spectral energy distribution with a slope β = −2.0 ± 0.2, consistent with demonstrated trends with luminosity at high redshift. Thorough testing and use of grism data excludes known low-redshift contamination at high significance, including single emission-line sources, but as-yet unknown low redshift sources could provide an alternative solution given the surprising luminosity of these candidates. Finding such bright galaxies at z ∼ 9-10 suggests that the luminosity function for luminous galaxies might evolve in a complex way at z > 8. The cosmic star formation rate density still shows, however, an order-of-magnitude increase from z ∼ 10 to z ∼ 8 since the dominant contribution comes from low-luminosity sources. Based on the IRAC detections, we derive galaxy stellar masses at z ∼ 10, finding that these luminous objects are typically 10^9 M_⊙. This allows for a first estimate of the cosmic stellar mass density at z ∼ 10, resulting in log10 ρ* = 4.7 (+0.5/−0.8) M_⊙ Mpc^−3 for galaxies brighter than M_UV ≈ −18. The remarkable brightness, and hence luminosity, of these z ∼ 9-10 candidates will enable deep spectroscopy to determine their redshift and nature, and highlights the opportunity for the James Webb Space Telescope to map the buildup of galaxies at redshifts much earlier than z ∼ 10.
A robust method for estimating the multifractal wavelet spectrum in geophysical images
NASA Astrophysics Data System (ADS)
Nicolis, Orietta; Porro, Francesco
2013-04-01
The description of natural phenomena through statistical scaling laws is a long-standing topic. Many studies aim to identify fractal features by estimating the self-similarity parameter H, considered constant across scales of observation. However, most real-world data exhibit a multifractal structure; that is, the self-similarity parameter varies erratically with time. The multifractal spectrum provides an efficient tool for characterizing the scaling and singularity structures in signals and images, and has proven useful in numerous applications such as fluid dynamics, internet network traffic, finance, image analysis, texture synthesis, meteorology, and geophysics. In recent years, the multifractal formalism has been implemented with wavelets. The advantages of the wavelet-based multifractal spectrum are the availability of fast algorithms for the wavelet transform, the locality of wavelet representations in both time and scale, and the intrinsic dyadic self-similarity of the basis functions. In this work we propose a robust wavelet-based multifractal spectrum estimator for the analysis of geophysical signals and satellite images. Finally, a simulation study and examples are presented to test the performance of the estimator.
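The wavelet route to a scaling exponent can be sketched in its simplest, monofractal form: for a self-similar process, the variance of the detail coefficients grows as 2^{j(2H+1)} across dyadic scales j, so a log-variance regression estimates H. The Haar cascade and the test signal below are illustrative choices, not the authors' robust multifractal estimator.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_hurst(x, levels=8):
    """Estimate H from the slope of log2(Var(detail_j)) versus scale j.

    For a self-similar process, Var(d_j) ~ 2^{j(2H+1)}, so the fitted
    slope equals 2H + 1.
    """
    log_vars, scales = [], []
    approx = np.asarray(x, dtype=float)
    for j in range(1, levels + 1):
        approx, detail = haar_step(approx)
        log_vars.append(np.log2(np.var(detail)))
        scales.append(j)
    slope, _ = np.polyfit(scales, log_vars, 1)
    return (slope - 1.0) / 2.0

# Brownian motion (cumulative sum of white noise) has H = 0.5.
rng = np.random.default_rng(1)
bm = np.cumsum(rng.normal(size=2 ** 14))
H = wavelet_hurst(bm)
```

For the Brownian-motion test signal, the estimate comes out close to the theoretical H = 0.5; a multifractal estimator generalizes this by tracking a whole spectrum of exponents rather than a single slope.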
Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Kontos, Despina
2013-12-15
Purpose: Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Studies suggest that the relative amount of fibroglandular (i.e., dense) tissue in the breast as quantified in MR images can be predictive of the risk for developing breast cancer, especially for high-risk women. Automated segmentation of the fibroglandular tissue and volumetric density estimation in breast MRI could therefore be useful for breast cancer risk assessment. Methods: In this work the authors develop and validate a fully automated segmentation algorithm, namely, an atlas-aided fuzzy C-means (FCM-Atlas) method, to estimate the volumetric amount of fibroglandular tissue in breast MRI. The FCM-Atlas is a 2D segmentation method working on a slice-by-slice basis. FCM clustering is first applied to the intensity space of each 2D MR slice to produce an initial voxelwise likelihood map of fibroglandular tissue. Then a prior learned fibroglandular tissue likelihood atlas is incorporated to refine the initial FCM likelihood map to achieve enhanced segmentation, from which the absolute volume of the fibroglandular tissue (|FGT|) and the relative amount (i.e., percentage) of the |FGT| relative to the whole breast volume (FGT%) are computed. The authors' method is evaluated by a representative dataset of 60 3D bilateral breast MRI scans (120 breasts) that span the full breast density range of the American College of Radiology Breast Imaging Reporting and Data System. The automated segmentation is compared to manual segmentation obtained by two experienced breast imaging radiologists. Segmentation performance is assessed by linear regression, Pearson's correlation coefficients, Student's paired t-test, and Dice's similarity coefficients (DSC). Results: The inter-reader correlation is 0.97 for FGT% and 0.95 for |FGT|.
When compared to the average of the two readers’ manual segmentation, the proposed FCM-Atlas method achieves a correlation of r = 0.92 for FGT% and r = 0.93 for |FGT|, and the automated segmentation is not statistically significantly different (p = 0.46 for FGT% and p = 0.55 for |FGT|). The bilateral correlation between left breasts and right breasts for the FGT% is 0.94, 0.92, and 0.95 for reader 1, reader 2, and the FCM-Atlas, respectively; likewise, for the |FGT|, it is 0.92, 0.92, and 0.93, respectively. For the spatial segmentation agreement, the automated algorithm achieves a DSC of 0.69 ± 0.1 when compared to reader 1 and 0.61 ± 0.1 for reader 2, respectively, while the DSC between the two readers’ manual segmentation is 0.67 ± 0.15. Additional robustness analysis shows that the segmentation performance of the authors' method is stable both with respect to selecting different cases and to varying the number of cases needed to construct the prior probability atlas. The authors' results also show that the proposed FCM-Atlas method outperforms the commonly used two-cluster FCM-alone method. The authors' method runs at ∼5 min for each 3D bilateral MR scan (56 slices) for computing the FGT% and |FGT|, compared to ∼55 min needed for manual segmentation for the same purpose. Conclusions: The authors' method achieves robust segmentation and can serve as an efficient tool for processing large clinical datasets for quantifying the fibroglandular tissue content in breast MRI. It holds a great potential to support clinical applications in the future including breast cancer risk assessment.
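The FCM stage of the method, soft clustering of voxel intensities, can be sketched in a few lines. This is a generic two-cluster fuzzy C-means on synthetic 1D intensities; the atlas refinement and all MRI handling are omitted, and the intensity values are hypothetical.

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means on 1D intensity values.

    A bare-bones sketch of the FCM step only (no atlas prior): returns
    cluster centers and soft membership weights per sample.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(data, dtype=float)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m                          # fuzzified memberships
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted centroids
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = dist ** (-2.0 / (m - 1.0))       # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Two well-separated intensity populations, e.g. fat-like (~100) and
# dense-tissue-like (~200) voxels (hypothetical numbers).
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(100, 5, 500), rng.normal(200, 5, 500)])
centers, u = fcm(intensities)
```

The membership map u plays the role of the initial voxelwise likelihood map that the atlas prior then refines in the full FCM-Atlas pipeline.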
NASA Astrophysics Data System (ADS)
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry in isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) show independent behavior on the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) value was approximately constant (E_a,int = 95.2 kJ mol−1 and E_a,diff = 96.6 kJ mol−1, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol−1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used for estimation of the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process with the conversion function f(α) = α^0.18 (1−α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfE_a's) are not dependent on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with corresponding results of the nonisothermal decomposition process of NaHCO3.
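Describing an isothermal conversion curve with a Weibull distribution function amounts to fitting α(t) = 1 − exp(−(t/η)^β) to the measured conversion. A hedged sketch with synthetic data follows; only the shape value β = 1.07 comes from the abstract, while the scale η = 30 (in arbitrary time units) and the perturbation are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_conversion(t, beta, eta):
    """Isothermal conversion modeled by a Weibull distribution function:
    alpha(t) = 1 - exp(-(t / eta)**beta)."""
    return 1.0 - np.exp(-((t / eta) ** beta))

# Synthetic conversion curve with shape beta = 1.07 (value from the paper)
# and an arbitrary, hypothetical scale eta = 30, lightly perturbed.
t = np.linspace(1, 120, 60)
alpha = weibull_conversion(t, 1.07, 30.0) + 0.002 * np.sin(t)

# Bounded least-squares fit recovers (beta, eta) from the curve.
(beta_fit, eta_fit), _ = curve_fit(weibull_conversion, t, alpha,
                                   p0=(1.0, 20.0),
                                   bounds=([0.1, 1.0], [5.0, 200.0]))
```

In practice one such fit per operating temperature would show whether β stays constant while η carries the temperature dependence, as reported above.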
Tsai, JN; Uihlein, AV; Burnett-Bowie, SM; Neer, RM; Zhu, Y; Derrico, N; Lee, H; Bouxsein, ML; Leder, BZ
2014-01-01
Combined teriparatide and denosumab increases spine and hip bone mineral density more than either drug alone. The effect of this combination on skeletal microstructure and microarchitecture, however, is unknown. Because skeletal microstructure and microarchitecture are important components of skeletal integrity, we performed high-resolution peripheral QCT assessments at the distal tibia and radius in postmenopausal osteoporotic women randomized to receive teriparatide 20-µg daily (n=31), denosumab 60-mg every 6 months (n=33), or both (n=30) for 12 months. In the teriparatide group, total volumetric BMD (vBMD) did not change at either anatomic site but increased in both other groups at both sites. The increase in vBMD at the tibia was greater in the combination group (3.1±2.2%) than both the denosumab (2.2±1.9%) and teriparatide groups (−0.3±1.9%) (p<0.02 for both comparisons). Cortical vBMD decreased by 1.6±1.9% at the tibia and by 0.9±2.8% at the radius in the teriparatide group whereas it increased in both other groups at both sites. Tibia cortical vBMD increased more in the combination group (1.5±1.5%) than both monotherapy groups (p<0.04 for both comparisons). Cortical thickness did not change in the teriparatide group, but increased in both other groups. The increase in cortical thickness at the tibia was greater in the combination group (5.4±3.9%) than both monotherapy groups (p<0.01 for both comparisons). In the teriparatide group, radial cortical porosity increased by 20.9±37.6% and by 5.6±9.9% at the tibia but did not change in the other two groups. Bone stiffness and failure load, as estimated by finite element analysis, did not change in the teriparatide group but increased in the other two groups at both sites.
Together, these findings suggest that the use of denosumab and teriparatide in combination improves HR-pQCT measures of bone quality more than either drug alone and may be of significant clinical benefit in the treatment of postmenopausal osteoporosis. PMID:25043459
Liu, Zhongming; Kecman, Fedja; He, Bin
2007-01-01
Objective Multimodal functional neuroimaging by combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) has been studied to achieve high-resolution reconstruction of the spatiotemporal cortical current density (CCD) distribution. However, mismatches between these two imaging modalities may occur due to their different underlying mechanisms. The aim of the present study is to investigate the effects of different types of fMRI-EEG mismatches, including fMRI invisible sources, fMRI extra regions and fMRI displacement, on the fMRI-constrained cortical imaging in a computer simulation based on realistic-geometry boundary-element-method (BEM) model. Methods Two methods have been adopted to integrate the synthetic fMRI and EEG data for CCD imaging. In addition to the well-known 90% fMRI-constrained Wiener filter approach (Liu AK, Belliveau JW and Dale AM, PNAS, 95: 8945–8950, 1998), we propose a novel two-step algorithm (referred to as “Twomey algorithm”) for fMRI-EEG integration. In the first step, a “hard” spatial prior derived from fMRI is imposed to solve the EEG inverse problem with a reduced source space; in the second step, the fMRI constraint is removed and the source estimate from the first step is re-entered as the initial guess of the desired solution into an EEG least squares fitting procedure with Twomey regularization. Twomey regularization is a modified Tikhonov technique that attempts to simultaneously minimize the distance between the desired solution and the initial estimate, and the residual errors of fitness to EEG data. The performance of the proposed Twomey algorithm has been evaluated both qualitatively and quantitatively along with the lead-field normalized minimum norm (WMN) and the 90% fMRI-weighted Wiener filter approach, under repeated and randomized source configurations. 
Point spread function (PSF) and localization error (LE) are used to measure the performance of different imaging approaches with or without a variety of fMRI-EEG mismatches. Results The results of the simulation show that the Twomey algorithm can successfully reduce the PSF of fMRI invisible sources compared to the Wiener estimation, without losing the merit of having much lower PSF of fMRI visible sources relative to the WMN solution. In addition, the existence of fMRI extra sources does not significantly affect the accuracy of the fMRI-EEG integrated CCD estimation for both the Wiener filter method and the proposed Twomey algorithm, while the Twomey algorithm may further reduce the chance of spurious sources occurring in the extra fMRI regions. The fMRI displacement away from the electrical source causes enlarged localization error in the imaging results of both the Twomey and Wiener approaches, while Twomey gives smaller LE than Wiener with the fMRI displacement ranging from 1 to 2 cm. With less than 2 cm of fMRI displacement, the LEs for the Twomey and Wiener approaches are still smaller than in the WMN solution. Conclusions The present study suggests that the presence of fMRI invisible sources is the most problematic factor responsible for the error of fMRI-EEG integrated imaging based on the Wiener filter approach, whereas this approach is relatively robust against the fMRI extra regions and small displacement between fMRI activation and electrical current sources. While maintaining the above advantages possessed by the Wiener filter approach, the Twomey algorithm can further effectively alleviate the underestimation of fMRI invisible sources, suppress fMRI spurious sources and improve the robustness against fMRI displacement. Therefore, the Twomey algorithm is expected to improve the reliability of multimodal cortical source imaging against fMRI-EEG mismatches.
Significance The proposed method promises to provide a useful alternative for multimodal neuroimaging integrating fMRI and EEG. PMID:16765085
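The second step of the proposed algorithm uses Twomey regularization, i.e. Tikhonov-style damping toward an initial estimate rather than toward zero. A generic sketch of that regularizer on a toy underdetermined problem follows; the matrix sizes and the "fMRI-informed" initial guess are invented for illustration and do not reproduce the authors' full two-step fMRI-EEG pipeline.

```python
import numpy as np

def twomey_solve(A, b, x0, lam):
    """Twomey-regularized least squares: minimize
        ||A x - b||^2 + lam^2 * ||x - x0||^2,
    i.e. Tikhonov regularization pulling the solution toward an initial
    estimate x0 instead of toward zero. Closed-form normal equations:
        (A^T A + lam^2 I) x = A^T b + lam^2 x0.
    """
    n = A.shape[1]
    lhs = A.T @ A + lam ** 2 * np.eye(n)
    rhs = A.T @ b + lam ** 2 * x0
    return np.linalg.solve(lhs, rhs)

# Underdetermined toy problem: 3 measurements, 5 unknowns.
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 5))
x_true = np.array([0.0, 1.0, 0.0, 2.0, 0.0])
b = A @ x_true
x0 = x_true + rng.normal(0, 0.1, size=5)   # noisy "prior-informed" guess
x_hat = twomey_solve(A, b, x0, lam=0.1)
```

Because the initial estimate x0 breaks the degeneracy of the underdetermined system, the regularized solution both fits the data and stays close to the prior, which is the role the fMRI-derived first-step estimate plays in the abstract.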
Wavelet based analysis of circuit breaker operation
Ren, Zhifang Jennifer
2004-09-30
A circuit breaker is an important interrupting device in a power system network. It usually has a lifetime of about 20 to 40 years. During a breaker's service life, maintenance and inspection are imperative duties to achieve its ...
Wavelet based Simulation of Reservoir Flow
NASA Astrophysics Data System (ADS)
Siddiqi, A. H.; Verma, A. K.; Noor-E-Zahra; Chandiok, Ashish; Hasan, A.
2009-07-01
Petroleum reservoirs consist of hydrocarbons and other chemicals trapped in the pores of rock. The exploration and production of hydrocarbon reservoirs is still the most important technology for developing natural energy resources, so fluid-flow simulators play a key role for oil companies. Indeed, simulation is the most important tool for modeling changes in a reservoir over time. The main problem in petroleum reservoir simulation is to model the displacement of one fluid by another within a porous medium. A typical problem is characterized by the injection of a wetting fluid, for example water, into the reservoir at a particular location, displacing the non-wetting fluid, for example oil, which is extracted or produced at another location. The Buckley-Leverett equation [1] models this process, and its numerical simulation and visualization are of paramount importance. Several numerical methods have been applied to the solution of partial differential equations modeling real-world problems. In this paper we review the numerical solution of the Buckley-Leverett equation for flat and non-flat structures, with special focus on wavelet methods. We also indicate a few new avenues for further research.
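For a concrete sense of the displacement problem, the 1D Buckley-Leverett equation S_t + f(S)_x = 0 with fractional-flow function f(S) = S²/(S² + M(1−S)²) can be advanced with a first-order upwind scheme. The grid sizes and mobility ratio M = 2 below are arbitrary illustrative choices, and this is a textbook finite-difference sketch, not the wavelet method the paper focuses on.

```python
import numpy as np

def frac_flow(s, mobility_ratio=2.0):
    """Fractional flow of water: f(S) = S^2 / (S^2 + M * (1 - S)^2)."""
    return s ** 2 / (s ** 2 + mobility_ratio * (1.0 - s) ** 2)

def buckley_leverett(nx=200, nt=400, dx=1.0 / 200, dt=1.0 / 1000):
    """Explicit first-order upwind scheme for S_t + f(S)_x = 0 on [0, 1]:
    water injected at the left boundary displaces oil.
    The time step satisfies the CFL condition for this flux."""
    s = np.zeros(nx)
    s[0] = 1.0                       # injection boundary: pure water
    for _ in range(nt):
        flux = frac_flow(s)
        s[1:] = s[1:] - dt / dx * (flux[1:] - flux[:-1])
        s[0] = 1.0
    return s

s = buckley_leverett()
```

The computed profile shows the classic Buckley-Leverett structure: a rarefaction behind a sharp water front, with the front smeared slightly by the scheme's numerical diffusion. Higher-order or wavelet-adapted discretizations aim to resolve that front more sharply.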
Wavelet based detection of manatee vocalizations
NASA Astrophysics Data System (ADS)
Gur, Berke M.; Niezrecki, Christopher
2005-04-01
The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
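The detection idea described, testing wavelet coefficients against a specified criterion, can be sketched with a single-level Haar detail-energy test. Everything below (frame length, sampling rate, the 3 kHz test tone, the k-sigma rule) is an illustrative stand-in for the study's actual data and decision criterion.

```python
import numpy as np

def detail_energy(x):
    """Energy of one level of Haar wavelet detail coefficients
    (captures the upper half of the frame's frequency band)."""
    x = x[: len(x) // 2 * 2]
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return float(np.sum(d ** 2))

def detect_call(frame, noise_frames, k=3.0):
    """Flag a frame as containing a call when its wavelet detail energy
    exceeds the mean background-noise energy by k standard deviations.
    A toy criterion, not the paper's detector."""
    noise_e = np.array([detail_energy(f) for f in noise_frames])
    return detail_energy(frame) > noise_e.mean() + k * noise_e.std()

# Background-noise frames and one frame containing a tonal "call".
rng = np.random.default_rng(4)
fs = 8000
t = np.arange(1024) / fs
noise_frames = [rng.normal(0, 0.1, 1024) for _ in range(20)]
call = 0.5 * np.sin(2 * np.pi * 3000 * t) + rng.normal(0, 0.1, 1024)
```

Because the 3 kHz tone falls in the band the first-level detail coefficients cover, the call frame's detail energy stands far above the noise statistics and trips the threshold.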
Wavelets based on splines: an application
NASA Astrophysics Data System (ADS)
Srinivasan, Pramila; Jamieson, Leah H.
1996-10-01
In this paper, we describe the theory and implementation of a variable rate speech coder using the cubic spline wavelet decomposition. In the discrete time wavelet extrema representation, Cvetkovic et al. implement an iterative projection algorithm to reconstruct the wavelet decomposition from the extrema representation. Building on this model, in prior work we described a technique for speech coding using the extrema representation, which suggests that the non-decimated extrema representation allows us to exploit the pitch redundancy in speech. A drawback of that scheme is the audible perceptual distortion caused by the iterative algorithm, which fails to converge on some speech frames. This paper attempts to alleviate the problem by showing that for a particular class of wavelets that implements the ladder of spaces consisting of the splines, the iterative algorithm can be replaced by an interpolation procedure. Conditions under which the interpolation reconstructs the transform exactly are identified. One of the advantages of the extrema representation is the 'denoising' effect. A least-squares technique for reconstructing the signal is developed. The effectiveness of the scheme in reproducing significant details of the speech signal is illustrated using an example.
Wavelet-based Voice Morphing
Orphanidou, C.; Roberts, Stephen
…as computer and video game applications with game heroes speaking with desired voices. A complete voice … the original content. Many ongoing projects will benefit from the development of a successful voice morphing…
The U.S.EPA has published recommendations for calibrator cell equivalent (CCE) densities of enterococci in recreational waters determined by a qPCR method in its 2012 Recreational Water Quality Criteria (RWQC). The CCE quantification unit stems from the calibration model used to ...
Value at risk estimation with entropy-based wavelet analysis in exchange markets
NASA Astrophysics Data System (ADS)
He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung
2014-08-01
In recent years, exchange markets have become increasingly integrated, and fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and further improve the reliability of Value at Risk estimation. Wavelet analysis is introduced to construct the entropy-based multiscale portfolio Value at Risk estimation algorithm, accounting for the multiscale dynamic correlation. The entropy measure is proposed as the more effective criterion, under an error-minimization principle, for selecting the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper provide positive evidence of the superior performance of the proposed approach on the closely related Chinese renminbi and European euro exchange markets.
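One way to see the multiscale ingredient: with an orthonormal wavelet transform, Parseval's relation splits the return variance across scales, so a Gaussian VaR can be reassembled from the per-scale energies. This is a deliberately simplified sketch (Haar basis, synthetic univariate returns); the entropy-based basis selection and the multivariate portfolio machinery of the paper are omitted.

```python
import numpy as np
from scipy.stats import norm

def haar_dwt(x, levels=3):
    """Plain multilevel orthonormal Haar DWT: detail arrays per scale
    plus the final approximation."""
    details, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        nxt = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        details.append((approx[0::2] - approx[1::2]) / np.sqrt(2.0))
        approx = nxt
    return details, approx

def multiscale_gaussian_var(returns, alpha=0.99, levels=3):
    """Gaussian VaR with the return variance reassembled scale by scale
    from wavelet coefficient energies (Parseval's relation)."""
    details, approx = haar_dwt(returns, levels)
    n = len(returns)
    energy = sum(float(np.sum(d ** 2)) for d in details)
    energy += float(np.sum(approx ** 2))
    variance = energy / n - float(np.mean(returns)) ** 2
    return norm.ppf(alpha) * np.sqrt(variance)

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.02, size=4096)   # synthetic daily FX returns
var_99 = multiscale_gaussian_var(returns)
```

For an orthonormal basis the per-scale energies sum exactly to the total, so this reproduces the ordinary Gaussian VaR; the multiscale decomposition becomes genuinely informative once scales are modeled or weighted separately, as in the proposed algorithm.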
NASA Astrophysics Data System (ADS)
Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.
2014-12-01
A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic; dip latitude 17°N) since May 2013. The system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble-average method for converting relative TECs to absolute TECs. This method is a modified version of the differential-Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with large spatial gradients. The method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC at different local times using GRBR measurements are also presented, demonstrating the potential of radio beacon measurements in capturing large-scale plasma transport processes in the low-latitude ionosphere.
Schubmehl, M.
1999-03-01
Temperature and density histories of direct-drive laser fusion implosions are important to an understanding of the reaction's progress. Such measurements also document phenomena such as preheating of the core and improper compression that can interfere with the thermonuclear reaction. Model x-ray spectra from the non-LTE (local thermodynamic equilibrium) radiation transport post-processor for LILAC have recently been fitted to OMEGA data. The spectrum-fitting code reads in a grid of model spectra and uses an iterative weighted least-squares algorithm to fit the experimental data, based on user-input parameter estimates. The purpose of this research was to upgrade the fitting code to compute formal uncertainties on fitted quantities, and to provide temperature and density estimates with error bars. A standard error-analysis process was modified to compute these formal uncertainties from information about the random measurement error in the data. Preliminary tests of the code indicate that the variances it returns are both reasonable and useful.
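In the simplest linear case, the formal-uncertainty computation described above amounts to reading parameter variances off the inverse normal matrix of a weighted least-squares fit. A minimal sketch (a straight-line fit with per-point errors, not the actual LILAC/OMEGA spectrum-fitting code):

```python
# Hedged sketch: weighted least-squares fit of y = a + b*x whose error
# bars (sigma_a, sigma_b) come from the inverse normal matrix, the
# standard error-propagation recipe for formal uncertainties.
import math

def wls_line(xs, ys, sigmas):
    """Fit y = a + b x; return (a, b, sigma_a, sigma_b)."""
    w = [1.0 / s**2 for s in sigmas]
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    return a, b, math.sqrt(Sxx / delta), math.sqrt(S / delta)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]            # roughly y = 1 + 2x
a, b, sa, sb = wls_line(xs, ys, [0.1] * 5)  # equal 0.1 error bars
```

The same normal-matrix inverse generalizes to nonlinear fits, where it is evaluated at the converged parameters.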
Wavelet-Based Methods in Image Processing
Broughton, S. Allen
Lecture 2.a, Fourier Transform (Applied Mathematics Seminar, S. Allen Broughton): convolution theorem · the Fourier transform takes convolution to pointwise multiplication · eigenvector interpretation · analysis and synthesis · discrete Fourier transform (DFT) · extension to 2-D · convolution theorem
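The convolution theorem named in the outline can be checked numerically with a naive DFT; a minimal sketch:

```python
# Numerical check of the convolution theorem: the DFT of a circular
# convolution equals the pointwise product of the individual DFTs.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def circ_conv(x, h):
    """Circular convolution of two equal-length sequences."""
    n = len(x)
    return [sum(x[k] * h[(j - k) % n] for k in range(n)) for j in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
h = [0.5, 0.25, 0.0, 0.25]
lhs = dft(circ_conv(x, h))                       # DFT of the convolution
rhs = [a * b for a, b in zip(dft(x), dft(h))]    # product of the DFTs
```

The two sides agree to floating-point precision, which is exactly the "convolution becomes pointwise multiplication" statement from the slides.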
2014-01-01
Background The difficulty of accurately assessing LLIN use has led us to test electronic data-logging motion detectors to provide quantitative data on household LLIN usage. Methods The main movements associated with an LLIN when appropriately used for malaria control were characterised under laboratory conditions. Data output from motion detectors attached to the LLINs associated with these specific movements were collated. In preliminary field studies in central Côte d'Ivoire, a pre-tested and validated questionnaire was used to identify the number of days householders claimed to have slept under LLINs. This information was compared to data downloaded from the motion detectors. Results Output data recording movement on the x, y, and z axes from the data loggers was consistently associated with the specific net movements. Recall of LLIN usage reported by questionnaires after a week was overestimated by 13.6%. This increased to 22.8% after 2 weeks and 38.7% after a month compared to information from the data loggers. Rates of LLIN use were positively correlated with An. gambiae s.s. biting density (LRT = 273.70; P …
NASA Astrophysics Data System (ADS)
Charbonneau, David; Harps-N Collaboration
2015-01-01
Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition. HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets. In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014. These resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes.
They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National Astrophysical Institute, the University of St. Andrews, Queen's University Belfast, and the University of Edinburgh. This work was made possible through a grant from the John Templeton Foundation.
Plonne, D.; Schlag, B.; Winkler, L.; Dargel, R.
1990-05-01
To get insight into the low density lipoprotein (LDL)-apoB flux in the rat fetus near term and in the early postnatal period, homologous apoE-free 125I-labeled LDL was injected into the umbilical vein of the rat fetus immediately after Caesarean section. Since the serum LDL-apoB spontaneously declined after birth, a time-dependent two-pool model was used to calculate the flux rates in the neonate from the specific activities of LDL-apoB up to 15 h post partum. An approximate value of LDL-apoB flux in the fetus at birth was obtained by extrapolation of the kinetic data to the time of injection of the tracer. The data revealed that the turnover of LDL-apoB in the fetus (18.6 micrograms LDL-apoB/h per g body weight) exceeded that in the adult rat (0.4 microgram/h per g body weight) by at least one order of magnitude. Even 15 h after delivery, the LDL-apoB influx amounted to 2.5 micrograms/h per g body weight. The fractional catabolic rate of LDL-apoB in the fetus at term (0.39 h-1) slightly exceeded that in the adult animal (0.15 h-1), reached the adult level within the first 3 h after birth, and remained constant thereafter. In the rat fetus, LDL-apoB flux greatly exceeds that of VLDL-apoB. The data support the view of a direct synthesis and secretion of LDL, most probably by the fetal membranes.
Baby, S.; Hyeong, K.-E.; Lee, Y.-M.; Jung, J.-H.; Oh, D.-Y.; Nam, K.-C.; Kim, T. H.; Lee, H.-K.; Kim, J.-J.
2014-01-01
The accuracy of genomic estimated breeding values (GEBV) was evaluated for sixteen meat quality traits in a Berkshire population (n = 1,191) collected from the Dasan breeding farm, Namwon, Korea. The animals were genotyped with the Illumina porcine 62K single nucleotide polymorphism (SNP) bead chips, from which a set of 36,605 SNPs was available after quality control tests. Two methods were applied to evaluate GEBV accuracies, i.e. the genome-based linear unbiased prediction method (GBLUP) and Bayes B, using ASREML 3.0 and Gensel 4.0 software, respectively. The traits comprised different sets of training (both genotypes and phenotypes) and testing (genotypes only) data. Under the GBLUP model, the GEBV accuracies for the training data ranged from 0.42±0.08 for collagen to 0.75±0.02 for water holding capacity with an average of 0.65±0.04 across all the traits. Under the Bayes B model, the GEBV accuracy ranged from 0.10±0.14 for National Pork Producers Council (NPPC) marbling score to 0.76±0.04 for drip loss, with an average of 0.49±0.10. For the testing samples, the GEBV accuracy had an average of 0.46±0.10 under the GBLUP model, ranging from 0.20±0.18 for protein to 0.65±0.06 for drip loss. Under the Bayes B model, the GEBV accuracy ranged from 0.04±0.09 for NPPC marbling score to 0.72±0.05 for drip loss with an average of 0.38±0.13. The GEBV accuracy increased with the size of the training data and heritability. In general, the GEBV accuracies under the Bayes B model were lower than under the GBLUP model, especially when the training sample size was small. Our results suggest that a much greater training sample size is needed to get better GEBV accuracies for the testing samples. PMID:25358312
Mineral deposit density; an update
Singer, Donald A.; Menzie, W. David; Sutphin, David M.; Mosier, Dan L.; Bliss, James D. (in: Contributions to Global Mineral Resource Assessment Research, edited by Schulz, Klaus J.)
2001-01-01
A robust method to estimate the number of undiscovered deposits is a form of mineral deposit model wherein numbers of deposits per unit area from well-explored regions are counted and the resulting frequency distribution is used either directly for an estimate or indirectly as a guideline in some other method. The 27 mineral deposit density estimates reported here for 13 different deposit types represent a start at compiling the estimates necessary to guide assessments.
Numerical errors in density functional calculations
Jansen, Henri J. F.
We present some useful results for an estimate of the error in density functional calculations when self-consistency is not reached. This estimate provides a way to improve the prediction of a new charge density.
Johnson, Benjamin L.; Schroeder, Michael E.; Wolfson, Tanya; Gamst, Anthony C.; Hamilton, Gavin; Shiehmorteza, Masoud; Loomba, Rohit; Schwimmer, Jeffrey B.; Reeder, Scott; Middleton, Michael S.; Sirlin, Claude B.
2013-01-01
Purpose To evaluate the effect of flip angle (FA) on accuracy and within-examination repeatability of hepatic proton-density fat fraction (PDFF) estimation with complex data-based magnetic resonance imaging (MRI). Materials and Methods PDFF was estimated at 3T in thirty subjects, using two sets of five MRI sequences with FA from 1° to 5° in each set. One set used 7ms repetition time and acquired 6 echoes (TR7/E6); the other used 14ms and acquired 12 echoes (TR14/E12). For each FA in both sets, the accuracy of MRI-PDFF was assessed relative to MR spectroscopy (MRS)-PDFF using four regression parameters (slope, intercept, average bias, R2). Each subject had four random sequences repeated; within-examination repeatability of MRI-PDFF for each FA was assessed with intraclass correlation coefficient (ICC). Pairwise comparisons were made using bootstrap-based tests. Results Most FAs provided high MRI-PDFF estimation accuracy (intercept range -1.25–0.84, slope 0.89–1.06, average bias 0.24–1.65, R2 0.85–0.97). Most comparisons of regression parameters between FAs were not significant. Informally, in the TR7/E6 set, FAs of 2° and 3° provided the highest accuracy, while FAs of 1° and 5° provided the lowest. In the TR14/E12 set, accuracy parameters did not differ consistently between FAs. FAs in both sets provided high within-examination repeatability (ICC range 0.981–0.998). Conclusion MRI-PDFF was repeatable and, for most FAs, accurate in both sequence sets. In the TR7/E6 sequence set, FAs of 2° and 3° informally provided the highest accuracy. In the TR14/E12 sequence set, all FAs provided similar accuracy. PMID:23596052
ERIC Educational Resources Information Center
Keiter, Richard L.; Puzey, Whitney L.; Blitz, Erin A.
2006-01-01
Metal rods of high purity for many elements are now commercially available and may be used to construct a display of relative densities. We have constructed a display with nine metal rods (Mg, Al, Ti, V, Fe, Cu, Ag, Pb, and W) of equal mass whose densities vary from 1.74 to 19.3 g cm⁻³. The relative densities of the metals may be…
Adaptive Hausdorff Estimation of Density Level Sets
Nowak, Robert
… with respect to the symmetric set difference, G1 Δ G2 = (G1 \ G2) ∪ (G2 \ G1). For example, in [3, 13, 14, 16] a probability measure … for very restricted classes of sets (e.g., the boundary-fragment and star-shaped sets) that effectively …
Helmholtz Machine Density Estimation
Helmholtz machine density estimation … supervised learning, active learning (query learning) [1, 3] … unsupervised learning [5] … The Helmholtz machine [2] consists of a generative network and a recognition network …
Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
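The StOMP variant itself is not reproduced here, but the greedy select-then-refit idea behind it can be sketched with plain orthogonal matching pursuit on a toy dictionary (the paper's method adds thresholded stagewise selection, prior information, and non-negativity, all omitted below):

```python
# Simplified sketch of orthogonal matching pursuit (OMP): greedily pick
# the atom most correlated with the residual, then refit all selected
# atoms by least squares. Dictionary and signal are illustrative.

def solve(A, b):
    """Gauss-Jordan elimination for a small square system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivot
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(dictionary, y, k):
    """Recover a k-sparse combination of dictionary atoms (rows)."""
    support, residual, coef = [], y[:], []
    for _ in range(k):
        best = max((j for j in range(len(dictionary)) if j not in support),
                   key=lambda j: abs(sum(a * r for a, r
                                         in zip(dictionary[j], residual))))
        support.append(best)
        # least squares on the selected atoms via the normal equations
        G = [[sum(a * b for a, b in zip(dictionary[i], dictionary[j]))
              for j in support] for i in support]
        rhs = [sum(a * v for a, v in zip(dictionary[i], y)) for i in support]
        coef = solve(G, rhs)
        residual = [v - sum(c * dictionary[j][t]
                            for c, j in zip(coef, support))
                    for t, v in enumerate(y)]
    return dict(zip(support, coef))

# Signal is 2 * atom0 + 3 * atom2; OMP should recover exactly that.
atoms = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [1, -1, 1, -1]]
y = [2.0, 0.0, 3.0, 3.0]
est = omp(atoms, y, 2)
```

StOMP differs mainly in selecting *all* atoms whose correlation exceeds a threshold at each stage, which is faster when many coefficients are active.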
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from them by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each map. Histograms are also created from the selected maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are then applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point.
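The correlation-coefficient scoring at the heart of the method can be sketched as a Pearson correlation between a local density patch and each stored template (a 1-D toy with flattened, made-up density values; the patent operates on 3-D spherical regions):

```python
# Hedged sketch: score a local density patch against standard templates
# with the Pearson correlation coefficient and keep the best match.
import math

def correlation(patch, template):
    """Pearson correlation between two equal-length value lists."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    vp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    vt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return cov / (vp * vt)

patch = [0.1, 0.4, 0.9, 0.4, 0.1]        # density values around a grid point
templates = [[0.0, 0.5, 1.0, 0.5, 0.0],  # peak-like template
             [1.0, 0.5, 0.0, 0.5, 1.0]]  # trough-like template
scores = [correlation(patch, t) for t in templates]
best = max(range(len(templates)), key=lambda i: scores[i])
```

Because the correlation is invariant to the patch's scale and offset, the same template matches density features of different absolute magnitude, which is what makes template averaging across maps feasible.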
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
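The quantity whose sampling statistics are analyzed above is the cross-spectral estimate. A minimal sketch of the standard segment-averaged estimator (not the paper's derivation), using two sinusoids in quadrature so the cross-spectrum is purely imaginary at the shared frequency:

```python
# Illustrative sketch: estimate the cross-spectrum Sxy(f) by averaging
# X(f) * conj(Y(f)) over non-overlapping segments of two time series.
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def cross_spectrum(x, y, seg_len):
    """Average X(f) * conj(Y(f)) over non-overlapping segments."""
    nseg = len(x) // seg_len
    acc = [0j] * seg_len
    for s in range(nseg):
        X = dft(x[s*seg_len:(s+1)*seg_len])
        Y = dft(y[s*seg_len:(s+1)*seg_len])
        for j in range(seg_len):
            acc[j] += X[j] * Y[j].conjugate()
    return [v / nseg for v in acc]

n, f0 = 256, 8                 # 8 cycles per 64-sample segment
x = [math.cos(2*math.pi*f0*t/64) for t in range(n)]   # reference
y = [math.sin(2*math.pi*f0*t/64) for t in range(n)]   # 90 deg behind
Sxy = cross_spectrum(x, y, 64)
```

The phase of Sxy at the shared bin encodes the relative phase of the two components, which is exactly the information Sxy-based directional wave analyses exploit.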
Pennings, Steven C.
… of habitat types, plant biomass, and invertebrate densities in a Georgia salt marsh (Oceanography 26; by John F. Schalles et al.). Salt marshes often contain remarkable spatial heterogeneity at multiple scales across the landscape …
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
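The weighing arithmetic the abstract refers to follows from Archimedes' principle; a small sketch with illustrative numbers (the densities of water and air below are nominal values, not those used in the report):

```python
# Sketch of hydrostatic weighing: sample density from apparent masses
# measured in air and in water, with an air-buoyancy correction.
def density_hydrostatic(m_air, m_water, rho_water=0.99704, rho_air=0.0012):
    """Sample density in g/cm^3 from balance readings in air and water.

    m_air, m_water: apparent masses (g); rho_* in g/cm^3 (near 25 C).
    """
    # buoyancy difference: m_air - m_water = V * (rho_water - rho_air)
    volume = (m_air - m_water) / (rho_water - rho_air)
    true_mass = m_air + rho_air * volume   # correct the in-air reading
    return true_mass / volume

# An illustrative ~10 g sample with density near 1.86 g/cm^3
rho = density_hydrostatic(10.000, 4.642)
```

The precision claims in the abstract come from exactly the quantities in this formula: balance resolution, the water and air densities (hence temperature), and thermal equilibration before the in-water reading.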
NASA Astrophysics Data System (ADS)
Bitenc, M.; Kieffer, D. S.; Khoshelham, K.
2015-08-01
The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimate of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates considering (i) wavelet transform (SWT or DWT), (ii) thresholding method (fixed-form or penalised low) and (iii) thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which for the second analysis are corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for the different amounts of noise that might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised-low hard thresholding.
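The ingredients being compared — a discrete wavelet transform plus hard or soft thresholding of the detail coefficients — can be sketched as follows (one-level Haar only, for illustration; the paper also evaluates the SWT and data-driven threshold choices):

```python
# Sketch of wavelet denoising: one-level Haar DWT, threshold the detail
# coefficients (hard = keep-or-kill, soft = shrink toward zero), invert.

def haar_fwd(x):
    s = 2 ** 0.5
    return ([(x[2*i] + x[2*i+1]) / s for i in range(len(x)//2)],
            [(x[2*i] - x[2*i+1]) / s for i in range(len(x)//2)])

def haar_inv(approx, detail):
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def threshold(coeffs, t, mode="hard"):
    if mode == "hard":
        return [c if abs(c) > t else 0.0 for c in coeffs]
    # soft: shrink surviving coefficients toward zero by t
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

def denoise(x, t, mode="hard"):
    approx, detail = haar_fwd(x)
    return haar_inv(approx, threshold(detail, t, mode))
```

With the threshold at zero the transform inverts exactly; raising it removes small-scale fluctuations, which is precisely the trade-off against real roughness that the paper quantifies.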
Breast Density Analysis Using an Automatic Density Segmentation Algorithm.
Oliver, Arnau; Tortajada, Meritxell; Lladó, Xavier; Freixenet, Jordi; Ganau, Sergi; Tortajada, Lidia; Vilagran, Mariona; Sentís, Melcior; Martí, Robert
2015-10-01
Breast density is a strong risk factor for breast cancer. In this paper, we present an automated approach for breast density segmentation in mammographic images based on a supervised pixel-based classification using textural and morphological features. The objective of the paper is not only to show the feasibility of an automatic algorithm for breast density segmentation but also to prove its potential application to the study of breast density evolution in longitudinal studies. The database used here contains three complete screening examinations, acquired 2 years apart, of 130 different patients. The approach was validated by comparing manual expert annotations with automatically obtained estimations. Transversal analysis of the breast density in craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts acquired in the same study showed a correlation coefficient of ρ = 0.96 between the mammographic density percentage for left and right breasts, whereas a comparison of both mammographic views showed a correlation of ρ = 0.95. A longitudinal study of breast density confirmed the trend that dense tissue percentage decreases over time, although we noticed that the decrease in the ratio depends on the initial amount of breast density. PMID:25720749
Wavelet-Based Techniques for the Gamma-Ray Sky
McDermott, Samuel D; Cholis, Ilias; Lee, Samuel K
2015-01-01
We demonstrate how the image analysis technique of wavelet decomposition can be applied to the gamma-ray sky to separate emission on different angular scales. New structures on scales that differ from the scales of the conventional astrophysical foreground and background uncertainties can be robustly extracted, allowing a model-independent characterization with no presumption of exact signal morphology. As a test case, we generate mock gamma-ray data to demonstrate our ability to extract extended signals without assuming a fixed spatial template. For some point source luminosity functions, our technique also allows us to differentiate a diffuse signal in gamma-rays from dark matter annihilation and extended gamma-ray point source populations in a data-driven way.
WAVELET-BASED FUNCTIONAL MIXED MODEL ANALYSIS: COMPUTATIONAL CONSIDERATIONS
Morris, Jeffrey S.
… the brain or lung of 16 nude mice. 32 spectra were obtained from blood serum samples, modeled by 5 fixed … of 147 nude mice. Spectra were obtained from blood serum of these mice as well as 37 controls. Design …
Three-dimensional wavelets-based denoising of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Brook, Anna
2015-01-01
We propose a three-dimensional (3-D) denoising approach and coding scheme. The suggested denoising algorithm takes full advantage of the supplied volumetric data by decomposing the original hyperspectral imagery into individual subspaces, applying an orthogonal isotropic 3-D divergence-free wavelet transformation. The hierarchical structure of the wavelet coefficients improves the efficiency of the suggested denoising algorithm and effectively preserves the finest details and relevant image features by emphasizing nonlocal similarity and the spectral-spatial structure of the hyperspectral imagery in a sparse representation. The proposed method is evaluated using spectral angle distance for a ground-truth spectral dataset and by classification accuracies using water quality indices, which are particularly sensitive to the presence of noise. The reported results are based on real datasets from three different airborne hyperspectral systems: AHS, CASI-1500i, and AisaEAGLE. Several qualitative and quantitative evaluation measures are applied to validate the ability of the suggested method for noise reduction and image quality enhancement. Experimental results demonstrate that the proposed denoising algorithm achieves better performance when applied with the suggested wavelet transformation compared with other examined noise reduction and hyperspectral image restoration techniques.
Identification of structural damage using wavelet-based data classification
NASA Astrophysics Data System (ADS)
Koh, Bong-Hwan; Jeong, Min-Joong; Jung, Uk
2008-03-01
Predicted time-history responses from a finite-element (FE) model provide a baseline map where damage locations are clustered and classified by extracted damage-sensitive wavelet coefficients such as vertical energy threshold (VET) positions having large silhouette statistics. Likewise, the measured data from damaged structure are also decomposed and rearranged according to the most dominant positions of wavelet coefficients. Having projected the coefficients to the baseline map, the true localization of damage can be identified by investigating the level of closeness between the measurement and predictions. The statistical confidence of baseline map improves as the number of prediction cases increases. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied for locating structural damage even in the presence of a considerable amount of process and measurement noise.
Fast Rendering of Foveated Volumes in Wavelet-based Representation
Chang, Ee-Chien
… of resolution. It can be efficiently represented in the wavelet domain by retaining a small number … is sent to the client for rendering. This strategy requires very high bandwidth and resources at the client … ROI. Thus, it is acceptable to display objects in the ROI in full resolution, and omit details …
Wavelet based hyperspectral image restoration using spatial and spectral penalties
NASA Astrophysics Data System (ADS)
Rasti, Behnood; Sveinsson, Johannes R.; Ulfarsson, Magnus O.; Benediktsson, Jon A.
2013-10-01
In this paper a penalized least-squares cost function with a new spatial-spectral penalty is proposed for hyperspectral image restoration. The new penalty is a combination of a Group LASSO (GLASSO) and a First-Order Roughness Penalty (FORP) in the wavelet domain. The restoration criterion is solved using the Alternating Direction Method of Multipliers (ADMM). The results are compared with other restoration methods; the proposed method outperforms them on a simulated noisy data set in terms of Signal-to-Noise Ratio (SNR) and visually outperforms them on a real degraded data set.
Wavelet-based segmentation for fetal ultrasound texture images
NASA Astrophysics Data System (ADS)
Zayed, Nourhan M.; Badawi, Ahmed M.; Elsayed, Alaa M.; Elsherif, Mohamed S.; Youssef, Abou-Bakr M.
2002-05-01
This paper introduces an efficient algorithm for segmentation of fetal ultrasound images using multiresolution analysis. The proposed algorithm decomposes the input image into a multiresolution space using the two-dimensional wavelet packet transform. The system builds a feature vector for each pixel that contains information about the gray level, moments, and other texture information. These vectors are used as inputs to the fuzzy c-means clustering method, which results in a segmented image whose regions are distinct from each other according to their texture content. An adaptive center-weighted median filter is used to enhance fetal ultrasound images before wavelet decomposition. Experiments indicate that this method can be applied with promising results. Preliminary experiments show good results in image segmentation, while further studies are needed to investigate the potential of wavelet analysis and fuzzy c-means clustering as tools for detecting fetal organs in digital ultrasound images.
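The clustering step can be sketched with a minimal fuzzy c-means on scalar features (the paper clusters multi-feature texture vectors derived from the wavelet packet decomposition; two clusters and made-up feature values are assumed here):

```python
# Toy sketch of fuzzy c-means on 1-D features: alternate between
# membership updates (inverse-distance weights) and center updates
# (membership-weighted means). Assumes c=2 clusters.

def fuzzy_cmeans(xs, c=2, m=2.0, iters=50):
    """Return (centers, memberships) for 1-D data."""
    centers = [min(xs), max(xs)][:c]          # simple initialization
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # membership of point i in cluster j, rows sum to 1
        for i, x in enumerate(xs):
            d = [abs(x - v) + 1e-12 for v in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
        # centers move to the fuzzily-weighted mean of their points
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(xs))]
            centers[j] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u

# two texture "populations" of per-pixel feature values
xs = [0.1, 0.15, 0.2, 0.9, 0.95, 1.0]
centers, u = fuzzy_cmeans(xs)
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which suits the ambiguous tissue boundaries typical of ultrasound texture.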
Edge-Preserving Wavelet-Based Multisensor Image Fusion Approach
Ghouti, Lahouari
separately from plain and low activity image regions. This edge-guided fusion offers a trade-off between, there has been a growing interest in merging images obtained using multiple sensors in academia, industry is with the Information and Computer Science Department. King Fahd University of Petroleum and Minerals, Dhahran 31261
Wavelets-based clustering of air quality monitoring sites.
Gouveia, Sónia; Scotto, Manuel G; Monteiro, Alexandra; Alonso, Andres M
2015-11-01
This paper aims at providing a variance/covariance profile of a set of 36 monitoring stations measuring ozone (O3) and nitrogen dioxide (NO2) hourly concentrations, collected over the period 2005-2013, in mainland Portugal. The resulting individual profiles are embedded in a wavelet decomposition-based clustering algorithm in order to identify groups of stations exhibiting similar profiles. The cluster analysis identifies three groups of stations, namely urban, suburban/urban/rural, and a third group containing all but one of the rural stations. The results clearly indicate a geographical pattern among urban stations, distinguishing those located in the Lisbon area from those located in Oporto/North. Furthermore, for urban stations, the intra-diurnal and daily time scales exhibit the highest variance. This is due to the more intense chemical activity occurring in areas with high NO2 emissions, which is responsible for the high variability of the daily profiles. These chemical processes also explain why NO2 and O3 are highly negatively cross-correlated at suburban and urban sites as compared with rural stations. Finally, the clustering analysis also identifies sites whose classification by environment/influence type needs revision. PMID:26483085
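A variance profile across dyadic time scales, of the kind used to characterize each station before clustering, can be sketched with a hand-rolled Haar decomposition. The synthetic series, the choice of the Haar wavelet, and the number of levels are all illustrative assumptions; the paper's actual wavelet machinery may differ:

```python
import numpy as np

def haar_wavelet_variances(x, levels=4):
    """Variance of Haar detail coefficients at each dyadic scale.

    Returns a per-scale variance profile that could feed a clustering
    algorithm, mirroring the variance profiles described in the abstract.
    """
    x = np.asarray(x, dtype=float)
    profile = []
    for _ in range(levels):
        n = len(x) // 2 * 2
        pairs = x[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # Haar detail
        x = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)        # Haar approximation
        profile.append(float(detail.var()))
    return np.array(profile)

# Two synthetic "stations": one with a strong daily cycle plus noise,
# one that is mostly noise (hourly samples over 64 days).
t = np.arange(24 * 64)
rng = np.random.default_rng(0)
urban = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
rural = 0.1 * rng.normal(size=t.size)
dist = float(np.linalg.norm(haar_wavelet_variances(urban)
                            - haar_wavelet_variances(rural)))
```

Stations with pronounced diurnal chemistry concentrate variance at the corresponding scales, so their profiles sit far from noise-dominated rural profiles in this feature space, which is what a distance-based clustering then exploits.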
Wavelet based mobile video watermarking: spread spectrum vs. informed embedding
NASA Astrophysics Data System (ADS)
Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.
2005-11-01
The cell phone expansion provides an additional channel for digital video content distribution: music clips, news, and sport events are increasingly transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge must be met: very low bitrate content (e.g. as low as 64 kbit/s) now has to be protected. Within this framework, the paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the chi-square, rho, Fisher, and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and does not apply at all to the latter. As these results a priori determine the performance of several watermarking methods, of both the spread-spectrum and informed-embedding types, they should be considered at the design stage.
A Face Recognition Scheme using Wavelet Based Dominant Features
Imtiaz, Hafiz
2011-01-01
In this paper, a multi-resolution feature extraction algorithm for face recognition is proposed based on the two-dimensional discrete wavelet transform (2D-DWT), which efficiently exploits the local spatial variations in a face image. For the purpose of feature extraction, instead of considering the entire face image, an entropy-based local band selection criterion is developed, which selects high-informative horizontal segments from the face image. In order to capture the local spatial variations within these high-informative horizontal bands precisely, each horizontal band is segmented into several small spatial modules. Dominant wavelet coefficients corresponding to each local region residing inside those horizontal bands are selected as features. In the selection of the dominant coefficients, a threshold criterion is proposed, which not only drastically reduces the feature dimension but also provides high within-class compactness and high between-class separability. A principal component analysis is performed t...
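The two selection steps the abstract describes, entropy-based band selection followed by magnitude-based thresholding of wavelet coefficients, can be sketched as below. The histogram-entropy criterion, the one-level hand-rolled Haar transform, and the 10% keep-ratio are all illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def band_entropy(band, bins=32):
    """Shannon entropy of a band's gray-level histogram (selection criterion)."""
    hist, _ = np.histogram(band, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def dominant_haar_details(band, keep_ratio=0.1):
    """One-level 2D Haar detail coefficients; keep only the largest-magnitude ones."""
    a = band[0::2, 0::2]; b = band[0::2, 1::2]
    c = band[1::2, 0::2]; d = band[1::2, 1::2]
    details = np.concatenate([((a - b + c - d) / 2).ravel(),   # horizontal detail
                              ((a + b - c - d) / 2).ravel(),   # vertical detail
                              ((a - b - c + d) / 2).ravel()])  # diagonal detail
    thr = np.quantile(np.abs(details), 1.0 - keep_ratio)       # threshold criterion
    return details[np.abs(details) >= thr]

rng = np.random.default_rng(0)
face = rng.random((64, 64))                        # stand-in for a face image
bands = [face[r:r + 16] for r in range(0, 64, 16)] # horizontal bands
best = max(bands, key=band_entropy)                # entropy-based band selection
features = dominant_haar_details(best)             # dominant-coefficient features
```

Keeping only the top fraction of coefficients by magnitude is what collapses the feature dimension while retaining the coefficients that carry most of the band's spatial variation.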
Wavelet based similarity measurement algorithm for seafloor morphology
Darilmaz, İlkay
2006-01-01
The recent expansion of systematic seafloor exploration programs such as geophysical research, seafloor mapping, search and survey, resource assessment and other scientific, commercial and military applications has created ...
Image superresolution of cytology images using wavelet based patch search
NASA Astrophysics Data System (ADS)
Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo
2015-01-01
Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high and low frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing into wavelets the high resolution images and transmitting only the lower frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet reconstructed image with equivalent high resolution patches from a previously acquired image database. Finally, the original transmitted low frequency coefficients are used to correct the final image. Results show a higher signal to noise ratio in the proposed method over simply discarding high frequency wavelet coefficients or replacing directly down-sampled patches from the image-database.
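The transmit-only-the-low-frequencies idea at the core of this scheme can be illustrated with a one-level orthonormal Haar transform, hand-rolled here to stay self-contained. The wavelet basis, image, and single decomposition level are assumptions for the sketch; the paper leaves these details unspecified in the abstract:

```python
import numpy as np

def haar2_decompose(img):
    """One-level 2D Haar transform (orthonormal): returns LL, LH, HL, HH."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar2_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar2_decompose."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in for a cytology image
ll, lh, hl, hh = haar2_decompose(img)
# "Transmit" only ll (a quarter of the data); the receiver reconstructs a
# full-size image with zeroed detail subbands, then would add detail back
# from a patch database as the paper describes.
approx = haar2_reconstruct(ll, np.zeros_like(lh),
                           np.zeros_like(hl), np.zeros_like(hh))
```

Zeroing the detail subbands is exactly the "discard high-frequency coefficients" baseline the paper compares against; the patch-search step then tries to recover those discarded details from previously acquired images.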
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest impulse; 2) a model, consisting of a uniform probability density of impulse time…
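One plausible reading of this truncated setup (an assumption for illustration, not the paper's exact model): impulse times are i.i.d. uniform on [0, tau] with the threshold tau unknown in (0, 1], and the data are the impulse count n and the largest observed time. A grid-posterior sketch:

```python
import numpy as np

# Assumed model: n impulse times i.i.d. uniform on [0, tau], tau in (0, 1];
# data are n and the largest observed time t_max. The joint likelihood of
# the observed times is proportional to tau**(-n) for tau >= t_max, so with
# a flat prior the posterior on [t_max, 1] is proportional to tau**(-n).
n, t_max = 5, 0.6
tau = np.linspace(t_max, 1.0, 10_000)   # grid over admissible thresholds
post = tau ** (-float(n))               # unnormalized posterior
post /= post.sum()                      # normalize on the grid
tau_mean = float((tau * post).sum())    # posterior mean of the threshold
```

The posterior is concentrated just above t_max and decays as a power law, so the Bayes estimate sits modestly above the largest observed impulse time, with the gap shrinking as the impulse count n grows.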
The cosmological matter density
NASA Astrophysics Data System (ADS)
Davis, M.
2000-08-01
The status of observational cosmology is a subject that David Schramm followed intently. As spokesman for the entire field of particle astrophysics, David was interested in the full picture. He was always conversant with the latest developments in observations of the light elements, as they directly impacted his work on primordial nucleosynthesis and the resulting predicted abundances of deuterium, helium, and lithium. He was especially keen on knowing the status of the latest measurements of the cosmic density parameter, Ωm, as a sufficiently high value, higher than that predicted by primordial nucleosynthesis, motivates the case for a non-baryonic component of dark matter. He had a deep interest in the phenomenology of large-scale structure, as this provides a powerful clue to the nature of the dark matter and the initial fluctuations generated in the early Universe. This review briefly summarizes current techniques for estimation of the density of the Universe. These estimates on a variety of physical scales yield generally consistent results, suggesting that the dark matter, apart from a possible smooth component, is well mixed with the galaxy distribution on large scales. A near consensus has emerged that the matter density of the Universe, Ωm, is a factor of 3-4 less than required for closure. Measures of the amplitude and growth rate of structure in the local Universe depend on a degenerate combination of Ωm and the bias b in the observed galaxy distribution. The unknown bias in the galaxy distribution has been a persistent problem, but methods for breaking the degeneracy exist and are likely to be widely applied in the next several years.
Population Growth Change Population Size or Density
Caraco, Thomas
(Lecture-note fragments: models of population growth expressed in population size or density, with measurable/observable quantities; unbounded growth as a useful model for small populations, ecological invasion, and the dynamics of rarity; estimating the population size N of a single population of identical, sessile individuals such as plants.)
Further Developments in Orbit Ephemeris Derived Neutral Density
Locke, Travis Cole
2012-12-31
Existing atmospheric density models do not accurately capture the variations in atmospheric density. In this research, precision orbit ephemerides (POE) are used as input measurements in an optimal orbit determination scheme in order to estimate corrections...
Cosmic deuterium and baryon density
Hogan, C J
1995-01-01
Quasar absorption lines now permit a direct probe of deuterium abundances in primordial material, with the best current estimate (D/H) = (1.9 ± 0.4) × 10⁻⁴. If this is the universal primordial abundance (D/H)_p, Standard Big Bang Nucleosynthesis yields an estimate of the mean cosmic density of baryons, η₁₀ = 1.7 ± 0.2 or Ω_b h² = (6.2 ± 0.8) × 10⁻³, leading to SBBN predictions in excellent agreement with estimates of primordial abundances of helium-4 and lithium-7. Lower values of (D/H)_p derived from Galactic chemical evolution models may instead be a sign of destruction of deuterium and helium-3 in stars. The inferred baryon density is compared with known baryons in stars and neutral gas; about two thirds of the baryons are in some still-unobserved form such as ionized gas or compact objects. Galaxy dynamical mass estimates reveal the need for primarily nonbaryonic dark matter in galaxy halos. Galaxy cluster dynamics imply that the total density of this dark matter, while twenty or mor...
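The two central values quoted in the abstract can be cross-checked against each other using the commonly quoted conversion eta_10 ≈ 274 × Omega_b h² (the constant 274 is a standard approximation assumed here, not stated in the abstract):

```python
# Cross-check of the abstract's numbers: eta_10 = 1.7 should correspond to
# Omega_b h^2 ≈ 6.2e-3 under the common approximation
# eta_10 ≈ 274 * Omega_b * h**2.
eta10 = 1.7
omega_b_h2 = eta10 / 274.0
print(round(omega_b_h2, 4))  # → 0.0062, matching the quoted 6.2e-3
```

The agreement confirms the two quoted quantities are just the same measurement expressed in different units: a baryon-to-photon ratio versus a fractional closure density.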
Nuclear Energy Density Optimization
Kortelainen, Erno M; Lesinski, Thomas; Moré, J.; Nazarewicz, W.; Sarich, J.; Schunck, N.; Stoitsov, M. V.; Wild, S.
2010-01-01
We carry out state-of-the-art optimization of a nuclear energy density functional of Skyrme type in the framework of Hartree-Fock-Bogoliubov (HFB) theory. The particle-hole and particle-particle channels are optimized simultaneously, and the experimental data set includes both spherical and deformed nuclei. The new model-based, derivative-free optimization algorithm used in this work has been found to be significantly better than standard optimization methods in terms of reliability, speed, accuracy, and precision. The resulting parameter set, UNEDFpre, yields good agreement with experimental masses, radii, and deformations and seems to be free of finite-size instabilities. An estimate of the reliability of the obtained parameterization is given, based on standard statistical methods. We discuss new physics insights offered by the advanced covariance analysis.