NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Wicke, Jason; Dumas, Geneviève A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be sensitive enough to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
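For intuition, here is a minimal sketch of the building block: the L1-median (geometric median) computed with Weiszfeld's algorithm, which resists outliers that drag the sample mean. This illustrates the robust-location idea only, not López-Rubio's full PDF estimator.

```python
# Minimal sketch: robust location via the L1-median (Weiszfeld's algorithm).
# Illustrates the general strategy only, not the paper's exact estimator.
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Geometric (L1) median of the rows of X via Weiszfeld iterations."""
    m = X.mean(axis=0)                      # start at the sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < tol, tol, d)       # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:5] += 50.0                               # a few gross outliers
print("mean:     ", X.mean(axis=0))         # dragged by the outliers
print("L1-median:", l1_median(X))           # stays near the origin
```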
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
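A minimal sketch of the scoring principle, assuming only the classical fact that the i-th order statistic of n uniform variates is Beta(i, n-i+1): a trial CDF is scored by how typical its transformed order statistics look. The paper's calibrated, sample-size-invariant scoring function is not reproduced here.

```python
# Sketch: score a trial CDF F by the log-likelihood of u_(i) = F(x_(i)) under
# the Beta(i, n-i+1) law of uniform order statistics. Illustration only.
import numpy as np
from scipy import stats

def order_statistic_score(u_sorted):
    n = len(u_sorted)
    i = np.arange(1, n + 1)
    return stats.beta.logpdf(u_sorted, i, n - i + 1).sum()

rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=500))

good = np.sort(stats.norm.cdf(x))             # correct trial CDF
bad = np.sort(stats.norm.cdf(x, scale=2.0))   # over-smoothed trial CDF
print(order_statistic_score(good), order_statistic_score(bad))  # good > bad
```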
A spatially explicit capture-recapture estimator for single-catch traps.
Distiller, Greg; Borchers, David L
2015-11-01
Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.
Trunk density profile estimates from dual X-ray absorptiometry.
Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A
2008-01-01
Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of the participants (one female and one male) and the averages for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
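As an illustration of the representation step, the sketch below fits a truncated Fourier series to a toy trunk density profile; the profile values and the number of retained harmonics are invented for illustration, not taken from the DXA data.

```python
# Sketch: represent a density profile rho(z) along the (normalized) trunk
# length with a truncated Fourier series, as a stand-in for the paper's
# DXA-based density functions. Profile and harmonic count are illustrative.
import numpy as np

z = np.linspace(0.0, 1.0, 64, endpoint=False)         # normalized trunk length
rho = 1.05 - 0.15 * np.exp(-((z - 0.35) / 0.1) ** 2)  # toy profile (g/cm^3)

coeffs = np.fft.rfft(rho) / len(rho)                   # Fourier coefficients
K = 4                                                  # harmonics to keep

def density(z_eval, coeffs, K):
    """Evaluate the truncated Fourier series at arbitrary positions."""
    out = np.full_like(z_eval, coeffs[0].real)
    for k in range(1, K + 1):
        out += 2 * (coeffs[k].real * np.cos(2 * np.pi * k * z_eval)
                    - coeffs[k].imag * np.sin(2 * np.pi * k * z_eval))
    return out

print(np.max(np.abs(density(z, coeffs, K) - rho)))     # truncation error
```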
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
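A short worked example of why linearity emerges, assuming the stated beta prior and binomially distributed jump counts (standard beta-binomial conjugacy); the numbers below are illustrative.

```python
# Sketch: with a Beta(a, b) prior on the rate p and a binomial count k out of
# n jumps, the posterior is Beta(a + k, b + n - k), so the MMSE estimate
# E[p | k] = (a + k) / (a + b + n) is an affine function of the observation k.
a, b, n = 2.0, 3.0, 10
for k in range(0, 11, 2):
    print(k, (a + k) / (a + b + n))   # increases linearly in k
```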
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix.
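A hedged sketch of the selection step: fit several candidate densities by maximum likelihood and compare them with AIC. The data and candidate set below are toy stand-ins, not the Klamath River observations or the authors' exact candidate list.

```python
# Sketch: maximum-likelihood fits of candidate densities, compared by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
depths = rng.gamma(shape=3.0, scale=0.4, size=300)   # toy depth data (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)                # ML fit, location fixed
    loglik = dist.logpdf(depths, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * loglik         # floc is not estimated
    print(f"{name:8s} AIC = {aic:.1f}")              # smaller is better
```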
The multicategory case of the sequential Bayesian pixel selection and estimation procedure
NASA Technical Reports Server (NTRS)
Pore, M. D.; Dennis, T. B. (Principal Investigator)
1980-01-01
A Bayesian technique for stratified proportion estimation and a sampling scheme based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case, which allows the procedure to be extended.
Tigers and their prey: Predicting carnivore densities from prey abundance
Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.
2004-01-01
The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km². Estimated tiger densities (3.2-16.8 tigers per 100 km²) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
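For intuition, a minimal sketch of the baseline distance-method estimator under complete spatial randomness; the paper's nonparametric order-statistics estimator relaxes this Poisson assumption, which is assumed here purely for illustration.

```python
# Sketch: if r_i is the distance from a random point to the nearest plant in a
# homogeneous Poisson pattern, then P(R > r) = exp(-lambda * pi * r^2), so
# R^2 is Exponential(lambda * pi) and a simple unbiased estimator follows.
import numpy as np

rng = np.random.default_rng(3)
true_lambda = 0.5                          # plants per unit area
n = 200

r2 = rng.exponential(1.0 / (true_lambda * np.pi), size=n)  # simulated R^2

lambda_hat = (n - 1) / (np.pi * r2.sum())  # (n-1) makes the estimator unbiased
print(lambda_hat)                          # close to true_lambda
```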
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimates of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
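A sketch of the synthetic-signal setup, assuming one-sided exponential pulses and exponentially distributed amplitudes (common reference choices; the paper's exact parameterization may differ):

```python
# Sketch: synthesize a filtered Poisson process (exponential pulses) plus an
# additive Gaussian noise term, then check the mean against Campbell's theorem.
import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.01, 500.0
t = np.arange(0.0, T, dt)
rate, tau, amp_mean = 0.5, 1.0, 1.0    # event rate, pulse duration, mean amplitude

arrivals = rng.uniform(0.0, T, size=rng.poisson(rate * T))
amps = rng.exponential(amp_mean, size=arrivals.size)

signal = np.zeros_like(t)
for s, a in zip(arrivals, amps):
    mask = t >= s
    signal[mask] += a * np.exp(-(t[mask] - s) / tau)  # one-sided pulse

noisy = signal + rng.normal(0.0, 0.2, size=t.size)    # additive noise term

# Campbell's theorem for this pulse shape: mean = rate * tau * amp_mean.
print(noisy.mean(), rate * tau * amp_mean)
```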
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
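A hedged sketch of the Monte Carlo step: draw inputs, apply the passive sonar equation, and average a detector curve over the draws. All distributions and constants below are illustrative placeholders, not the paper's measured inputs.

```python
# Sketch: Monte Carlo estimate of the average click detection probability.
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

SL = rng.normal(200.0, 5.0, N)           # source level (dB re 1 uPa), assumed
r = rng.uniform(100.0, 4000.0, N)        # range to whale (m), assumed
TL = 20.0 * np.log10(r)                  # spherical-spreading transmission loss
NL = 70.0                                # noise level (dB), assumed
snr = SL - TL - NL                       # passive sonar equation (dB)

# Detector characterization: probability of detection vs. SNR, here a
# logistic curve with an assumed 12 dB midpoint.
p_det = 1.0 / (1.0 + np.exp(-(snr - 12.0) / 2.0))
print("mean P(detect a click) =", p_det.mean())
```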
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
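For concreteness, a sketch of the multiple-coherence computation from a cross-spectral density matrix at one frequency bin; the Cholesky/SVD orderings discussed above act on this same matrix. The numerical values are illustrative.

```python
# Sketch: multiple coherence of an output y with a set of inputs x, computed
# directly from the cross-spectral density matrix at a single frequency.
import numpy as np

def multiple_coherence(Sxx, Sxy, Syy):
    """gamma^2 = Sxy^H Sxx^{-1} Sxy / Syy for one frequency bin."""
    return np.real(Sxy.conj().T @ np.linalg.solve(Sxx, Sxy)) / np.real(Syy)

# Toy 2-input example (Hermitian input CSD matrix, illustrative values).
Sxx = np.array([[4.0, 1.0 + 0.5j],
                [1.0 - 0.5j, 3.0]])        # input CSD matrix
Sxy = np.array([1.2 - 0.3j, 0.8 + 0.2j])   # input-output cross spectra
Syy = 2.5                                  # output auto-spectrum
print(multiple_coherence(Sxx, Sxy, Syy))   # between 0 and 1
```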
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m³ and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Density estimation using the trapping web design: A geometric analysis
Link, W.A.; Barker, R.J.
1994-01-01
Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.
Spread of Epidemic on Complex Networks Under Voluntary Vaccination Mechanism
NASA Astrophysics Data System (ADS)
Xue, Shengjun; Ruan, Feng; Yin, Chuanyang; Zhang, Haifeng; Wang, Binghong
Under the assumption that vaccination is a voluntary behavior, in this paper we use two forms of risk function to characterize how susceptible individuals estimate the perceived risk of infection. One is the uniform case, where each susceptible individual estimates the perceived risk of infection based only on the density of infection at each time step, so the risk function is a function of the density of infection alone; the other is the preferential case, where each susceptible individual estimates the perceived risk of infection based not only on the density of infection but also on its own activity/immediate neighbors (in network terminology, the activity or the number of immediate neighbors is the degree of the node), so the risk function is a function of both the density of infection and the degree of the individual. By investigating these two ways of estimating the risk of infection for susceptible individuals on complex networks, we find that, for the preferential case, the spread of epidemic can be effectively controlled; yet, for the uniform case, the voluntary vaccination mechanism is almost invalid in controlling the spread of epidemic on networks. Furthermore, given the temporality of some vaccines, the epidemic waves for the two cases also differ. Therefore, our work indicates that the way of estimating the perceived risk of infection determines the decision on vaccination options, and in turn determines the success or failure of the control strategy.
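A minimal sketch of the two risk-function forms; the linear and degree-weighted shapes below are assumed purely for illustration, not taken from the paper.

```python
# Sketch: uniform risk depends only on the infected density rho; preferential
# risk also scales with the individual's degree k. Functional forms assumed.
def risk_uniform(rho, s=1.0):
    return min(1.0, s * rho)                  # same perceived risk for everyone

def risk_preferential(rho, k, mean_k, s=1.0):
    return min(1.0, s * rho * k / mean_k)     # hubs perceive higher risk

print(risk_uniform(0.1), risk_preferential(0.1, k=20, mean_k=4))
```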
mBEEF-vdW: Robust fitting of error estimation density functionals
NASA Astrophysics Data System (ADS)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
Estimating effective data density in a satellite retrieval or an objective analysis
NASA Technical Reports Server (NTRS)
Purser, R. J.; Huang, H.-L.
1993-01-01
An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
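A sketch of the underlying diagnostic for a linear retrieval, assuming a simple Tikhonov/ridge inverse: the model resolution matrix and its trace, the 'degrees of freedom for signal'. Matrix sizes and the damping weight are illustrative.

```python
# Sketch: model resolution matrix R for a damped linear retrieval
# x_hat = G y with y = A x + noise, and trace(R) as degrees of freedom.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 10))                 # forward operator (obs x state)
lam = 1.0                                     # background/regularization weight

G = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T)   # generalized inverse
R = G @ A                                              # model resolution matrix

dofs = np.trace(R)                            # degrees of freedom for signal
print(dofs)                                   # <= 10, the state dimension
```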
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the Non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
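A hedged sketch of the method-of-moments step being reviewed: solve for the Lagrange multipliers of a maximum entropy density that matches given moments, here the first two moments on a grid (which should recover a standard normal). Grid and targets are illustrative.

```python
# Sketch: maximum entropy density p(x) ~ exp(-(lam1*x + lam2*x^2)) matching
# E[x] = 0 and E[x^2] = 1, found by minimizing the convex dual in lam.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-5, 5, 801)
targets = np.array([0.0, 1.0])                 # target moments E[x], E[x^2]

def dual(lam):
    logp = -(lam[0] * x + lam[1] * x ** 2)
    logZ = np.log(np.trapz(np.exp(logp), x))   # log partition function
    return logZ + lam @ targets                # convex dual objective

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(-(lam[0] * x + lam[1] * x ** 2))
p /= np.trapz(p, x)
print([np.trapz(x ** (k + 1) * p, x) for k in range(2)])  # ~ [0.0, 1.0]
```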
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Izbicki, R.; Lee, A. B.
2017-07-01
Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10^6 galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
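One common way to estimate such importance weights, sketched below, is to train a probabilistic classifier to separate labelled from unlabelled objects and convert its output into a density ratio. This is a generic stand-in under assumed toy magnitude distributions, not necessarily the estimator the authors tune with their risk functions.

```python
# Sketch: classifier-based importance weights w(x) = f_unlabelled / f_labelled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X_lab = rng.normal(21.0, 1.0, size=(2000, 1))    # bright (labelled) magnitudes
X_unl = rng.normal(23.0, 1.5, size=(2000, 1))    # faint (unlabelled) magnitudes

X = np.vstack([X_lab, X_unl])
z = np.r_[np.zeros(len(X_lab)), np.ones(len(X_unl))]  # 1 = unlabelled
clf = LogisticRegression().fit(X, z)

p = clf.predict_proba(X_lab)[:, 1]
weights = p / (1.0 - p)          # density-ratio estimate (equal sample sizes)
print(weights.min(), weights.max())
```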
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
Density Estimation with Mercer Kernels
NASA Technical Reports Server (NTRS)
Macready, William G.
2003-01-01
We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.
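As a reference point, the sketch below shows the standard EM algorithm for a 1-D two-component Gaussian mixture, i.e. the procedure the paper modifies; the kernel-induced feature-space lifting is not shown, and the data are synthetic.

```python
# Sketch: standard EM for a two-component 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(10)
x = np.r_[rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)]

w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 0.5]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: component responsibilities for each point
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
             / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means, and variances
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
print(w, mu, var)   # recovers the generating parameters (up to label order)
```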
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
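A hedged sketch of the matrix-based idea on a finite partition, where each density is a probability vector and the transfer matrix is recovered by least squares; the paper's handling of the known perturbation density and recovery of the underlying map itself are omitted.

```python
# Sketch: recover a column-stochastic transfer matrix M from density pairs
# (p, M p) by least squares. Densities are discretized on an m-cell partition.
import numpy as np

rng = np.random.default_rng(8)
m, T = 8, 60
M_true = rng.random((m, m))
M_true /= M_true.sum(axis=0, keepdims=True)   # column-stochastic matrix

P0 = rng.dirichlet(np.ones(m), size=T).T      # T random initial densities
P1 = M_true @ P0                              # their one-step images

# Solve M P0 = P1 in the least-squares sense (P1^T = P0^T M^T).
M_hat, *_ = np.linalg.lstsq(P0.T, P1.T, rcond=None)
M_hat = M_hat.T
print(np.max(np.abs(M_hat - M_true)))         # ~ machine precision
```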
Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.
2002-01-01
The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.
Boris Zeide
2004-01-01
Estimation of stand density is based on a relationship between number of trees and their average diameter in fully stocked stands. Popular measures of density (Reineke's stand density index and basal area) assume that number of trees decreases as a power function of diameter. Actually, number of trees drops faster than predicted by the power function because the number...
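For reference, a sketch of the power-function assumption behind Reineke's index; the exponent 1.605 and the 10-inch reference diameter are the conventional choices, and the input values below are illustrative.

```python
# Sketch: Reineke's stand density index. In fully stocked stands,
# N ~ k * Dq^(-1.605), so stands are compared by projecting tree count to a
# reference quadratic mean diameter of 10 inches.
def reineke_sdi(trees_per_acre, dq_inches):
    """Stand density index at the 10-inch reference diameter."""
    return trees_per_acre * (dq_inches / 10.0) ** 1.605

print(reineke_sdi(400.0, 8.0))   # ~280: an 8-in stand re-expressed at 10 in
```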
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable-oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We find that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
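A minimal sketch of the scheme under its stated assumptions: each component's density is linear in temperature, fixed by measurements at two temperatures, and the blend density follows Kay's volume-fraction mixing rule. The component values below are illustrative, not the paper's data.

```python
# Sketch: blend density from Kay's mixing rule with linear rho(T) components.
def component_density(T, T1, rho1, T2, rho2):
    """Linear density-temperature relation through two measured points."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, vol_frac_biodiesel, bio_pts, diesel_pts):
    rho_bio = component_density(T, *bio_pts)
    rho_die = component_density(T, *diesel_pts)
    x = vol_frac_biodiesel
    return x * rho_bio + (1.0 - x) * rho_die    # Kay's mixing rule

bio_pts = (288.15, 0.880, 313.15, 0.862)        # (T1, rho1, T2, rho2), g/cm^3
diesel_pts = (288.15, 0.835, 313.15, 0.817)
print(blend_density(300.0, 0.20, bio_pts, diesel_pts))
```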
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
Effects of stand density on top height estimation for ponderosa pine
Martin Ritchie; Jianwei Zhang; Todd Hamilton
2012-01-01
Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...
NASA Astrophysics Data System (ADS)
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
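A sketch of the core simulation ingredient: generating surrogate light curves with a power-law PSD via the Timmer & König recipe, and showing how large the peak cross-correlation of *unrelated* steep-spectrum series can be. Windowing, interpolation, and the Neyman construction from the paper are omitted, and all parameters are illustrative.

```python
# Sketch: surrogate time series with S(f) ~ f^(-beta) from random Fourier
# amplitudes, then the distribution of peak cross-correlations of unrelated
# pairs (which is far from zero for steep spectra).
import numpy as np

def power_law_noise(n, beta, rng):
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros(len(freqs), dtype=complex)
    amp[1:] = (freqs[1:] ** (-beta / 2.0)
               * (rng.normal(size=len(freqs) - 1)
                  + 1j * rng.normal(size=len(freqs) - 1)))
    x = np.fft.irfft(amp, n=n)
    return (x - x.mean()) / x.std()            # standardized light curve

rng = np.random.default_rng(9)
peaks = [np.max(np.correlate(power_law_noise(256, 2.5, rng),
                             power_law_noise(256, 2.5, rng), mode="full"))
         / 256.0
         for _ in range(200)]
print(np.percentile(peaks, [50, 95]))          # typically well above zero
```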
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
Investigation of the Specht density estimator
NASA Technical Reports Server (NTRS)
Speed, F. M.; Rydl, L. M.
1971-01-01
The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
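The Specht estimator is essentially a Parzen-type Gaussian kernel density estimator, so its computational profile can be appreciated in a few lines of modern code. This is a sketch of the kind of estimator studied, not the original IBM 360/44 program; the sample sizes and smoothing values are arbitrary.

    import numpy as np

    def specht_estimate(x, samples, sigma):
        """Gaussian-kernel density estimate of the Parzen/Specht kind;
        sigma is the smoothing parameter discussed in the abstract."""
        z = (x[:, None] - samples[None, :]) / sigma
        k = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        return k.mean(axis=1) / sigma

    rng = np.random.default_rng(1)
    samples = rng.normal(size=200)            # try rng.uniform(-1, 1, 200) too
    grid = np.linspace(-4, 4, 81)
    for sigma in (0.1, 0.3, 1.0):             # effect of the smoothing parameter
        print(sigma, specht_estimate(grid, samples, sigma).max())

Storage and operation counts grow with the product of evaluation points and sample size, which is exactly the cost trade-off the report investigates.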
A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.
Carreau, Julie; Bengio, Yoshua
2009-07-01
In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y, with (X, Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
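A rough sketch of the hybrid Pareto idea: a Gaussian body spliced continuously to a generalized Pareto upper tail at a junction point, normalized numerically. The parameterization below (a free junction u, with the tail scale tied to sigma) is a simplifying assumption of this sketch; Carreau and Bengio instead tie the junction and scale to the tail index so the density has exactly the three parameters described above, and the conditional model makes all parameters outputs of a neural network.

    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    def hybrid_pareto_pdf(y, mu, sigma, xi, u):
        """Gaussian body below the junction u, generalized Pareto tail above,
        with the tail scaled so the two pieces match continuously at u."""
        beta = sigma                      # simplifying assumption for this sketch
        body = stats.norm.pdf(y, mu, sigma)
        tail = stats.genpareto.pdf(y - u, xi, scale=beta)
        match = stats.norm.pdf(u, mu, sigma) / stats.genpareto.pdf(0.0, xi, scale=beta)
        return np.where(y <= u, body, match * tail)

    y = np.linspace(-6.0, 30.0, 8001)
    p = hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, xi=0.3, u=1.5)
    p /= trapezoid(p, y)                  # normalize numerically
    print("tail mass above u:", trapezoid(p[y > 1.5], y[y > 1.5]))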
Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.
W.Y. Pong; Dale R. Waddell; Lambert Michael B.
1986-01-01
Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...
Econometric studies of urban population density: a survey.
Mcdonald, J F
1989-01-01
This paper presents the first reasonably comprehensive survey of empirical research on urban population densities since the publication of the book by Edmonston in 1975. The survey summarizes contributions to empirical knowledge that have been made since 1975 and points toward possible areas for additional research. The paper also provides a brief interpretative intellectual history of the topic. It begins with a personal overview of research in the field. The next section discusses econometric issues that arise in the estimation of population density functions in which density is a function only of the distance to the central business district of the urban area. Section 4 summarizes the studies of a single urban area that went beyond the estimation of simple distance-density functions, and Section 5 discusses studies that sought to explain the variations across urban areas in population density patterns. McDonald refers to the standard theory of urban population density throughout the paper. This basic model is presented in the textbook by Mills and Hamilton, and it is assumed that the reader is familiar with the model.
Katsevich, Alexander J.; Ramm, Alexander G.
1996-01-01
Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is input to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ. In a second method for evaluating the value of the discontinuity, a gradient value for a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values.
NASA Astrophysics Data System (ADS)
Brauer, Uwe; Karp, Lavi
2018-01-01
Local existence and well-posedness for a class of solutions of the Euler-Poisson system is shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new non-linear estimate which allows one to estimate the physical density by the regularised non-linear matter variable. Gamblin has also studied this setting, but using very different functional spaces. However, we believe that the functional setting we use is more appropriate to describe a physically isolated body and more suitable for studying the Newtonian limit.
A Non-Parametric Probability Density Estimator and Some Applications.
1984-05-01
The estimator is evaluated against distributions assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. Sensitivity to support estimation is illustrated with the uniform distribution (Figure 4, "Sensitivity to Support Estimation"). The results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Astaf'ev, S. B., E-mail: webmaster@ns.crys.ras.ru; Shchedrin, B. M.; Yanusova, L. G.
The possibility of estimating the structural parameters of layered films by constructing the autocorrelation function P_F(z) (referred to as the Patterson differential function) for the derivative dρ/dz of the electron density along the normal to the sample surface has been considered. An analytical expression for P_F(z) is presented for a multilayered film within the box model of the electron density profile. The possibilities of extracting structural information about layered films by analyzing the features of this function are demonstrated on model and real examples, in particular, by applying the method of shifted systems of peaks for the function P_F(z).
Brown, Sandra [University of Illinois, Urbana, Illinois (USA); Iverson, Louis R. [University of Illinois, Urbana, Illinois (USA); Prasad, Anantha [University of Illinois, Urbana, Illinois (USA); Beaty, Tammy W. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, Lisa M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Cushman, Robert M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Brenkert, Antoinette L. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA)
2001-03-01
A database of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980 was generated. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam.
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
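To make the detection-function idea concrete, here is a minimal line-transect sketch with the common half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)). The distances and effort are made-up inputs; the designed-experiment analysis of the paper layers covariates (species, age class, habitat, sex) on top of exactly this machinery.

    import numpy as np

    def halfnormal_line_transect_density(distances, L):
        """Line-transect density estimate with a half-normal detection
        function.  The MLE of sigma^2 is the mean squared perpendicular
        distance; the effective strip half-width is the integral of g,
        mu = sigma * sqrt(pi / 2); density is n / (2 * mu * L)."""
        d = np.asarray(distances, float)
        sigma = np.sqrt(np.mean(d**2))
        mu = sigma * np.sqrt(np.pi / 2.0)
        return d.size / (2.0 * mu * L)

    rng = np.random.default_rng(2)
    d = np.abs(rng.normal(0.0, 20.0, size=60))   # hypothetical detections, metres
    print("animals per square metre:",
          halfnormal_line_transect_density(d, L=5000.0))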
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographic (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters must be estimated from small samples, and the small sample size induces errors in the estimated HOS parameters, limiting real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the sEMG PDF shape behavior observed during muscle contraction. According to the obtained results, the functional statistics appear more robust than HOS parameters to small-sample-size effects and more accurate for sEMG PDF shape screening applications.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Three statistical models for estimating length of stay.
Selvin, S
1977-01-01
The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
The detectability of brown dwarfs - Predictions and uncertainties
NASA Technical Reports Server (NTRS)
Nelson, L. A.; Rappaport, S.; Joss, P. C.
1993-01-01
In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
Electrostatics of DNA-Functionalized Nanoparticles
NASA Astrophysics Data System (ADS)
Hoffmann, Kyle; Krishnamoorthy, Kurinji; Kewalramani, Sumit; Bedzyk, Michael; Olvera de La Cruz, Monica
DNA-functionalized nanoparticles have applications in directed self-assembly and targeted cellular delivery of therapeutic proteins. In order to design specific systems, it is necessary to understand their self-assembly properties, of which the long-range electrostatic interactions are a critical component. We iteratively solved equations derived from classical density functional theory in order to predict the distribution of ions around DNA-functionalized Cg Catalase. We then compared estimates of the resonant intensity to those from SAXS measurements to estimate key features of DNA-functionalized proteins, such as the size of the region linking the protein and DNA and the extension of the single-stranded DNA. Using classical density functional theory and coarse-grained simulations, we are able to predict and understand these fundamental properties in order to rationally design new biomaterials.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
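For Gaussian kernels the quadratic (Renyi) entropy needs no numerical integration, because the integral of the squared kernel estimate has a closed form. The sketch below assumes a fixed bandwidth h rather than the optimal FT-index bandwidths derived in the paper.

    import numpy as np
    from scipy.stats import norm

    def quadratic_entropy(x, h):
        """Renyi quadratic entropy H2 = -log(integral of p^2) via a Gaussian
        KDE.  For Gaussian kernels the integral has the closed form
        (1/n^2) * sum_ij N(x_i - x_j; 0, 2 h^2)."""
        diff = x[:, None] - x[None, :]
        ip = norm.pdf(diff, scale=np.sqrt(2.0) * h).mean()
        return -np.log(ip)

    rng = np.random.default_rng(3)
    print(quadratic_entropy(rng.standard_normal(500), h=0.3))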
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Jones, Douglas; Towner, Robert; Waldon, James; Hunt, Ron
2013-01-01
Fluid-structure interaction estimates of panel vibration from an applied pressure field excitation are quite dependent on the spatial correlation of the pressure field. There is a danger of either overestimating the low frequency response or underpredicting the broadband panel response in the more modally dense bands if the pressure field spatial correlation is not accounted for adequately. It is a useful practice to simulate the spatial correlation of the applied pressure field over a 2D surface using a matrix of small patch area regions on a finite element model (FEM). Use of a fitted function for the spatial correlation between patch centers can result in an error if the choice of patch density is not fine enough to represent the more continuous spatial correlation function throughout the intended frequency range of interest. Several patch density assumptions to approximate the fitted spatial correlation function are first evaluated using both qualitative and quantitative illustrations. The actual response of a typical vehicle panel system FEM is then examined in a convergence study where the patch density assumptions are varied over the same model. The convergence study results illustrate the impacts possible from a poor choice of patch density on the analytical response estimate. The fitted correlation function used in this study represents a diffuse acoustic field (DAF) excitation of the panel to produce vibration response.
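A sketch of the patch-matrix construction: the fitted DAF spatial correlation between two points a distance r apart is commonly taken as sin(kr)/(kr), and each matrix entry below couples a pair of patch centers. The panel size, patch counts, and frequency are arbitrary assumptions; a convergence study like the one described varies the patch count nx and watches the response statistics settle.

    import numpy as np

    def daf_patch_correlation(centers, freq_hz, c=343.0):
        """Diffuse-acoustic-field spatial correlation sin(kr)/(kr) evaluated
        between patch centers; centers is an (n, 2) array in metres."""
        k = 2.0 * np.pi * freq_hz / c
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        kr = k * d
        out = np.ones_like(kr)
        np.divide(np.sin(kr), kr, out=out, where=kr > 0)   # sinc with C(0) = 1
        return out

    # 2D grid of patch centers on a 0.5 m x 0.5 m panel
    nx = 8                                   # patch density under study
    xs = (np.arange(nx) + 0.5) * (0.5 / nx)
    centers = np.array([(x, y) for x in xs for y in xs])
    C = daf_patch_correlation(centers, freq_hz=1000.0)
    print(C.shape, C[0, :4])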
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma-0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma-0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
USDA-ARS?s Scientific Manuscript database
In this study density functional theory (DFT) was used to study the adsorption of guaiacol and its initial hydrodeoxygenation (HDO) reactions on Pt(111). Previously reported Brønsted–Evans–Polanyi (BEP) correlations for small open chain molecules are found to be inadequate in estimating the reaction...
Estimating the densities of benzene-derived explosives using atomic volumes.
Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka
2018-02-09
The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula C_aH_bN_cO_d is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm^3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH_2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: Application and reliability of average atomic volume in the crystal density prediction of energetic compounds containing a benzene ring.
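The arithmetic behind the method is simple additivity: a molecular volume is the sum of average atomic volumes, and the crystal density follows from the molar mass. The atomic volumes below are rough placeholders for illustration, not the fitted values from the paper.

    # Hypothetical average atomic volumes (A^3 per atom); placeholders, NOT
    # the fitted values from the paper.
    VOL = {"C": 12.0, "H": 6.5, "N": 12.5, "O": 13.0}
    MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
    N_A = 6.02214076e23

    def estimated_density(formula):
        """Crystal density (g/cm^3) from additive atomic volumes:
        rho = M / (N_A * V_molecule)."""
        v = sum(VOL[el] * n for el, n in formula.items())      # A^3
        m = sum(MASS[el] * n for el, n in formula.items())     # g/mol
        return m / (N_A * v * 1e-24)                           # 1 A^3 = 1e-24 cm^3

    # TNT, C7H5N3O6
    print(estimated_density({"C": 7, "H": 5, "N": 3, "O": 6}))

With these placeholder volumes the TNT estimate lands near the experimental value of roughly 1.65 g/cm^3, but the percent-error statistics quoted above depend on properly fitted volumes.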
NASA Astrophysics Data System (ADS)
Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai
2018-05-01
The technique of teleseismic receiver function H-κ stacking is popular for estimating crustal thickness and the Vp/Vs ratio. However, it carries large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio under the joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density, and Vp/Vs ratio. Following the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies by using a common algorithm of likelihood estimation to obtain the crustal thickness and Vp/Vs ratio, and then utilize them to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the novel technique can reduce the ambiguity and enhance the accuracy of the estimation. A real-data test at two stations in the NE margin of the Tibetan Plateau illustrates that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
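For reference, here is a sketch of the underlying H-κ stack (in the Zhu and Kanamori style) that the gravity inversion then constrains. The receiver function, ray parameter, Vp, weights, and grids are placeholders; the paper's improvement effectively reweights this grid with a likelihood derived from the Bouguer anomaly inversion (not shown).

    import numpy as np

    def hk_stack(rf, t, p, Vp=6.3, H_grid=None, k_grid=None, w=(0.6, 0.3, 0.1)):
        """H-kappa stack for one receiver function rf sampled at times t,
        with ray parameter p (s/km).  Vs = Vp / kappa; the three phases are
        Ps, PpPs, and PpSs + PsPs."""
        H_grid = np.linspace(20.0, 60.0, 161) if H_grid is None else H_grid
        k_grid = np.linspace(1.6, 2.0, 81) if k_grid is None else k_grid
        eta_p = np.sqrt(1.0 / Vp**2 - p**2)
        s = np.zeros((H_grid.size, k_grid.size))
        for j, kappa in enumerate(k_grid):
            eta_s = np.sqrt((kappa / Vp) ** 2 - p**2)
            t1 = H_grid * (eta_s - eta_p)            # Ps
            t2 = H_grid * (eta_s + eta_p)            # PpPs
            t3 = 2.0 * H_grid * eta_s                # PpSs + PsPs
            r = lambda tt: np.interp(tt, t, rf)
            s[:, j] = w[0] * r(t1) + w[1] * r(t2) - w[2] * r(t3)
        i, j = np.unravel_index(np.argmax(s), s.shape)
        return H_grid[i], k_grid[j], s

    t = np.linspace(0.0, 40.0, 801)
    rf = np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)       # toy Ps pulse at 5 s
    H, kappa, s = hk_stack(rf, t, p=0.06)
    print(H, kappa)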
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
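The shape of any plug-in procedure can be seen in the normal-reference special case below, where the second-derivative functional is evaluated under a Gaussian assumption instead of the analytical approximation proposed in the paper; the n^(-1/5) bandwidth exponent is the MISE-optimal rate in both cases.

    import numpy as np

    def normal_reference_bandwidth(x):
        """Rule-of-thumb stand-in for the plug-in bandwidth: the unknown
        functional of f'' is replaced by its value under a normal
        reference, giving h = (4 sigma^5 / (3 n))^(1/5)."""
        n, sigma = x.size, x.std(ddof=1)
        return (4.0 * sigma**5 / (3.0 * n)) ** 0.2

    def kde(grid, x, h):
        z = (grid[:, None] - x[None, :]) / h
        return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(4)
    x = rng.standard_normal(400)
    h = normal_reference_bandwidth(x)
    print("h =", h, " f(0) ~", kde(np.array([0.0]), x, h)[0])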
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of S_xy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for S_xy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of S_xy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
Computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
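A sketch of the same pipeline in modern terms (the original is an ACSL-era program, so the names and choices below are assumptions): estimate the autocovariance, smooth it with a Hanning-style lag taper, and take the FFT.

    import numpy as np

    def pds_via_autocovariance(x, n_lags):
        """Blackman-Tukey style PDS estimate in the spirit of the program
        described above: autocovariance first, taper it, then FFT."""
        x = np.asarray(x, float) - np.mean(x)
        n = x.size
        # biased autocovariance at lags 0..n_lags
        acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n_lags + 1)])
        taper = 0.5 * (1.0 + np.cos(np.pi * np.arange(n_lags + 1) / n_lags))
        c = acov * taper
        sym = np.concatenate([c, c[-2:0:-1]])        # even (circular) extension
        psd = np.real(np.fft.fft(sym))
        freqs = np.fft.fftfreq(sym.size)
        keep = freqs >= 0
        return freqs[keep], psd[keep]

    rng = np.random.default_rng(5)
    f, S = pds_via_autocovariance(rng.standard_normal(4096), n_lags=128)
    print(S[:4])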
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
Unbiased estimators for spatial distribution functions of classical fluids
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Jones, Douglas; Towner, Robert; Hunt, Ron
2013-01-01
Fluid structural interaction problems that estimate panel vibration from an applied pressure field excitation are quite dependent on the spatial correlation of the pressure field. There is a danger of either overestimating a low frequency response or underpredicting broadband panel response in the more modally dense bands if the pressure field spatial correlation is not accounted for adequately. Even when the analyst elects to use a fitted function for the spatial correlation, an error may be introduced if the choice of patch density is not fine enough to represent the more continuous spatial correlation function throughout the intended frequency range of interest. Both qualitative and quantitative illustrations evaluating the adequacy of different patch density assumptions to approximate the fitted spatial correlation function are provided. The actual response of a typical vehicle panel system is then evaluated in a convergence study where the patch density assumptions are varied over the same finite element model. The convergence study results are presented illustrating the impact resulting from a poor choice of patch density. The fitted correlation function used in this study represents a Diffuse Acoustic Field (DAF) excitation of the panel to produce vibration response.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
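The transformation theorem at issue is the ordinary change-of-variables rule for densities, which is what makes a region that is uniform in one state space non-uniform in another. A one-dimensional numerical illustration, with an assumed map y = exp(x):

    import numpy as np

    # Change of variables: if y = g(x) is invertible, then
    #   p_y(y) = p_x(g^{-1}(y)) * |d g^{-1} / d y|
    # A uniform admissible region in x is NOT uniform in y = g(x) unless the
    # prior is constructed to be invariant under the reparameterization.
    a, b = 1.0, 2.0                     # uniform p_x on [a, b]
    y = np.linspace(np.exp(a), np.exp(b), 5)
    x = np.log(y)                       # g(x) = exp(x), g^{-1}(y) = log(y)
    p_x = 1.0 / (b - a)
    p_y = p_x * np.abs(1.0 / y)         # |d log(y) / dy| = 1/y
    print(p_y)                          # density varies with y: not uniform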
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
On the mean radiative efficiency of accreting massive black holes in AGNs and QSOs
NASA Astrophysics Data System (ADS)
Zhang, XiaoXia; Lu, YouJun
2017-10-01
Radiative efficiency is an important physical parameter that describes the fraction of accreted material converted to radiative energy during accretion onto massive black holes (MBHs). With the simplest Sołtan argument, the radiative efficiency of MBHs can be estimated by matching the mass density of MBHs in the local universe to the mass density accreted by MBHs during AGN/QSO phases. In this paper, we estimate the local MBH mass density through a combination of various determinations of the correlations between the masses of MBHs and the properties of MBH host galaxies, with the distribution functions of those galaxy properties. We also estimate the total energy density radiated by AGNs and QSOs by using various AGN/QSO X-ray luminosity functions in the literature. We then obtain several hundred estimates of the mean radiative efficiency of AGNs/QSOs. Under the assumption that those estimates are independent of each other and free of systematic effects, we apply the median statistics described by Gott et al. and find the mean radiative efficiency of AGNs/QSOs to be ε = 0.105 (+0.006, −0.008), which is consistent with the canonical value of 0.1. Considering that about 20% Compton-thick objects may be missed from currently available X-ray surveys, the true mean radiative efficiency may actually be about 0.12.
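The Sołtan-style matching reduces to one line of algebra: the accreted mass density is (1 − ε)/(ε c²) times the radiated energy density U, so matching to the local BH mass density ρ_BH gives ε = U / (U + ρ_BH c²). The inputs below are hypothetical placeholders, not the densities derived in the paper.

    Msun_g = 1.989e33                     # g
    Mpc_cm = 3.086e24                     # cm
    c = 2.998e10                          # cm/s

    # Placeholder inputs, NOT the paper's values:
    rho_bh = 4.0e5 * Msun_g / Mpc_cm**3   # assumed 4e5 Msun/Mpc^3, in g/cm^3
    U = 2.85e-15                          # assumed radiated energy density, erg/cm^3

    eps = U / (U + rho_bh * c**2)         # from rho_bh * c^2 * eps/(1-eps) = U
    print("mean radiative efficiency ~", round(eps, 3))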
Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.
ERIC Educational Resources Information Center
Samejima, Fumiko
The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…
Alternative Determination of Density of the Titan Atmosphere
NASA Technical Reports Server (NTRS)
Lee, Allan; Brown, Jay; Feldman, Antonette; Peer, Scott; Wang, Eric
2009-01-01
An alternative has been developed to direct measurement for determining the density of the atmosphere of the Saturn moon Titan as a function of altitude. The basic idea is to deduce the density versus altitude from telemetric data indicative of the effects of aerodynamic torques on the attitude of the Cassini Saturn orbiter spacecraft as it flies past Titan at various altitudes. The Cassini onboard attitude-control software includes a component that can estimate three external per-axis torques exerted on the spacecraft. These estimates are available via telemetry.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
Balabin, Roman M; Lomakina, Ekaterina I
2009-08-21
An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for the ANN training, cross validation, and testing by applying BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used for building the calibration model. The neural network structure optimization, leading to four to five hidden neurons, was also carried out. The usage of several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6 ± 0.2 kcal mol^-1. In addition, a comparison of the different density functionals with the basis sets and a comparison with multiple linear regression results were also provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
Lee, Sanghun; Park, Sung Soo
2011-11-03
Dielectric constants of electrolytic organic solvents are calculated employing nonpolarizable molecular dynamics simulations with the electronic continuum (MDEC) model and density functional theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants reproduced by these procedures are shown to provide a reliable approach for estimating the experimental data. In addition, two representative solvents that have similar molecular weights but different dielectric properties, ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short times as well as at long times.
The mean density and two-point correlation function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1988-01-01
The effect of large-scale inhomogeneities on the determination of the mean number density and the two-point spatial correlation function were investigated for two complete slices of the extension of the Center for Astrophysics (CfA) redshift survey (de Lapparent et al., 1986). It was found that the mean galaxy number density for the two strips is uncertain by 25 percent, more so than previously estimated. The large uncertainty in the mean density introduces substantial uncertainty in the determination of the two-point correlation function, particularly at large scale; thus, for the 12-deg slice of the CfA redshift survey, the amplitude of the correlation function at intermediate scales is uncertain by a factor of 2. The large uncertainties in the correlation functions might reflect the lack of a fair sample.
The 5-10 keV AGN luminosity function at 0.01 < z < 4.0
NASA Astrophysics Data System (ADS)
Fotopoulou, S.; Buchner, J.; Georgantopoulos, I.; Hasinger, G.; Salvato, M.; Georgakakis, A.; Cappelluti, N.; Ranalli, P.; Hsu, L. T.; Brusa, M.; Comastri, A.; Miyaji, T.; Nandra, K.; Aird, J.; Paltani, S.
2016-03-01
The active galactic nuclei (AGN) X-ray luminosity function traces actively accreting supermassive black holes and is essential for the study of the properties of the AGN population, black hole evolution, and galaxy-black hole coevolution. Up to now, the AGN luminosity function has been estimated several times in soft (0.5-2 keV) and hard X-rays (2-10 keV). AGN selection in these energy ranges often suffers from identification and redshift incompleteness, and, at the same time, photoelectric absorption can obscure a significant amount of the X-ray radiation. We estimate the evolution of the luminosity function in the 5-10 keV band, where we effectively avoid the absorbed part of the spectrum, rendering absorption corrections unnecessary up to N_H ~ 10^23 cm^-2. Our dataset is a compilation of six wide and deep fields: MAXI, HBSS, XMM-COSMOS, Lockman Hole, XMM-CDFS, AEGIS-XD, Chandra-COSMOS, and Chandra-CDFS. This extensive sample of ~1110 AGN (0.01 < z < 4.0, 41 < log L_x < 46) is 98% redshift complete, with 68% spectroscopic redshifts. For sources lacking a spectroscopic redshift we use the probability distribution function of a photometric redshift estimation specifically tuned for AGN, and a flat probability distribution function for sources with no redshift information. We use Bayesian analysis to select the best parametric model, from simple pure luminosity and pure density evolution to more complicated luminosity and density evolution and luminosity-dependent density evolution (LDDE). We estimate the model parameters that best describe our dataset separately for each survey and for the combined sample. We show that, according to Bayesian model selection, the preferred model for our dataset is the LDDE. Our estimation of the AGN luminosity function does not require any assumption on the AGN absorption and is in good agreement with previous works in the 2-10 keV energy band based on X-ray hardness ratios to model the absorption in AGN up to redshift three. Our sample does not show evidence of a rapid decline of the AGN luminosity function up to redshift four.
Ways to improve your correlation functions
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
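On point (1), the remedy advocated in this line of work replaces the standard pair-count estimators with a ratio that is insensitive to errors in the mean density to second order. Below is a toy one-dimensional sketch of the DD*RR/DR^2 - 1 form associated with this paper; the field size, counts, and bins are arbitrary assumptions.

    import numpy as np

    def xi_hamilton(data, randoms, edges):
        """Two-point correlation via the DD*RR/DR^2 - 1 pair-count ratio,
        using normalized pair counts; 1-D positions for brevity."""
        def paircounts(a, b, cross):
            d = np.abs(a[:, None] - b[None, :]).ravel()
            if not cross:
                d = d[d > 0]              # drop self-pairs
            return np.histogram(d, bins=edges)[0].astype(float)
        DD = paircounts(data, data, cross=False)
        RR = paircounts(randoms, randoms, cross=False)
        DR = paircounts(data, randoms, cross=True)
        # rescale counts to comparable totals
        DD /= data.size * (data.size - 1)
        RR /= randoms.size * (randoms.size - 1)
        DR /= data.size * randoms.size
        return DD * RR / DR**2 - 1.0

    rng = np.random.default_rng(6)
    print(xi_hamilton(rng.uniform(0, 100, 300), rng.uniform(0, 100, 3000),
                      np.linspace(0.5, 10, 11)))

For unclustered (uniform) data the output should scatter around zero, and a multiplicative error in the assumed mean density largely cancels between numerator and denominator.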
Locality of correlation in density functional theory.
Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano
2016-08-07
The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → −A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
Detectability of auditory signals presented without defined observation intervals
NASA Technical Reports Server (NTRS)
Watson, C. S.; Nichols, T. L.
1976-01-01
Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.
NASA Astrophysics Data System (ADS)
Guo, Xinwei; Qu, Zexing; Gao, Jiali
2018-01-01
The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupšys, P.
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
Measurements of surface-pressure fluctuations on the XB-70 airplane at local Mach numbers up to 2.45
NASA Technical Reports Server (NTRS)
Lewis, T. L.; Dods, J. B., Jr.; Hanly, R. D.
1973-01-01
Measurements of surface-pressure fluctuations were made at two locations on the XB-70 airplane for nine flight-test conditions encompassing a local Mach number range from 0.35 to 2.45. These measurements are presented in the form of estimated power spectral densities, coherence functions, and narrow-band-convection velocities. The estimated power spectral densities compared favorably with wind-tunnel data obtained by other experimenters. The coherence function and convection velocity data supported conclusions by other experimenters that low-frequency surface-pressure fluctuations consist of small-scale turbulence components with low convection velocity.
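For context, the quantities reported above (power spectral density, coherence, and cross-spectrum phase, from which narrow-band convection velocity follows) can be estimated from two pressure records with standard tools. The sketch below uses scipy.signal on synthetic signals; the sample rate, sensor spacing, and delay are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 10_000.0   # sample rate (Hz); illustrative
dx = 0.1        # sensor separation (m); illustrative
rng = np.random.default_rng(0)
x = rng.standard_normal(2**16)                         # pressure at sensor 1
y = np.roll(x, 8) + 0.5 * rng.standard_normal(2**16)   # delayed, noisier copy

f, Pxx = signal.welch(x, fs=fs, nperseg=1024)          # power spectral density
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)   # coherence function
f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)         # cross-spectral density
with np.errstate(divide="ignore", invalid="ignore"):
    # narrow-band convection velocity from the cross-spectrum phase slope
    Uc = 2.0 * np.pi * f * dx / np.unwrap(np.angle(Pxy))
```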
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from liver fibrosis, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. With the removal of non-speckle signals, the correct tissue characteristics of fibrotic tissue could be estimated.
Gao, Nuo; Zhu, S A; He, Bin
2005-06-07
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
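The estimation loop described above (adjust conductivities until the model-predicted flux density matches the measurements) can be sketched with a generic simplex optimizer. The forward model below is a toy linear surrogate, not a bioelectromagnetic solver or the paper's RBF network; conductivity values and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
model_matrix = rng.standard_normal((200, 3))   # toy linear surrogate of the
                                               # flux-density forward model

def predict_Bz(sigma):
    """Map layer conductivities (brain, skull, scalp) to one component of the
    magnetic flux density. A real MREIT model would solve the bioelectric
    field problem."""
    return model_matrix @ np.asarray(sigma)

sigma_true = np.array([0.33, 0.022, 0.33])     # S/m; illustrative values
Bz_measured = predict_Bz(sigma_true) + 0.01 * rng.standard_normal(200)

# Simplex (Nelder-Mead) search minimizing the measured-vs-predicted mismatch
res = minimize(lambda s: np.sum((predict_Bz(s) - Bz_measured) ** 2),
               x0=np.array([0.2, 0.05, 0.2]), method="Nelder-Mead")
print(res.x)   # estimated conductivities
```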
Nonlinear Statistical Estimation with Numerical Maximum Likelihood
1974-10-01
[Only fragments of the abstract and table of contents survive: "...probably most directly attributable to the speed, precision and compactness of the linear programming algorithm exercised; the mutual primal-dual..."; "...discriminant analysis is to classify the individual as a member of π1 or π2 according to the relative..."; contents include Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator: Density Functions.]
Hierarchical models for estimating density from DNA mark-recapture studies
Gardner, B.; Royle, J. Andrew; Wegan, M.T.
2009-01-01
Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
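In spatial capture-recapture models of this kind, per-trap detection probability is a decreasing function of the distance between a trap and an individual's activity center; a half-normal form is a common choice. The sketch below is a generic illustration with made-up parameters, not the paper's fitted model.

```python
import numpy as np

def detection_prob(activity_center, traps, g0=0.2, sigma=1.5):
    """Half-normal detection model typical of spatial capture-recapture:
    detection probability decays with distance between each trap and the
    individual's activity center (g0 and sigma are made-up values)."""
    d = np.linalg.norm(traps - activity_center, axis=1)
    return g0 * np.exp(-(d**2) / (2.0 * sigma**2))

traps = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # km
p = detection_prob(np.array([0.4, 0.6]), traps)  # per-trap detection probs
```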
Estimations of population density for selected periods between the Neolithic and AD 1800.
Zimmermann, Andreas; Hilpert, Johanna; Wendt, Karl Peter
2009-04-01
We describe a combination of methods applied to obtain reliable estimations of population density using archaeological data. The combination is based on a hierarchical model of scale levels. The necessary data and methods used to obtain the results are chosen so as to define transfer functions from one scale level to another. We apply our method to data sets from western Germany that cover early Neolithic, Iron Age, Roman, and Merovingian times as well as historical data from AD 1800. Error margins and natural and historical variability are discussed. Our results for nonstate societies are always lower than conventional estimations compiled from the literature, and we discuss the reasons for this finding. Finally, we compare the calculated local and global population densities with other estimations from different parts of the world.
Wen, Xiaotong; Rangarajan, Govindan; Ding, Mingzhou
2013-01-01
Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix. PMID:23858479
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes (supervised, unsupervised, and combined supervised-unsupervised) are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, combined supervised-unsupervised learning is always superior to supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
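The Bayesian combination step can be illustrated in one dimension: if the intensity-based and atlas/location-based conditionals are both treated as Gaussians, their product posterior has a precision-weighted mean. This is a simplified sketch of the formalism, not the paper's implementation, and the Hounsfield-unit numbers are invented.

```python
def fuse_gaussian_estimates(mu_int, var_int, mu_atlas, var_atlas):
    """Product of two Gaussian conditionals (intensity-based and atlas/
    location-based) is Gaussian with a precision-weighted mean: a 1-D
    simplification of the Bayesian combination described above."""
    w_i, w_a = 1.0 / var_int, 1.0 / var_atlas
    mu = (w_i * mu_int + w_a * mu_atlas) / (w_i + w_a)
    return mu, 1.0 / (w_i + w_a)

# e.g., intensity model suggests 300 +/- 150 HU; atlas suggests 450 +/- 100 HU
mu_post, var_post = fuse_gaussian_estimates(300.0, 150.0**2, 450.0, 100.0**2)
```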
NASA Astrophysics Data System (ADS)
Leherte, L.; Allen, F. H.; Vercauteren, D. P.
1995-04-01
A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to be a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results which are in close agreement with those from the conventional, spherical, van der Waals approach.
NASA Astrophysics Data System (ADS)
Leherte, Laurence; Allen, Frank H.
1994-06-01
A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
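A hedged sketch of the underlying idea: for two classes projected onto a direction w, the one-dimensional probability of misclassification can be evaluated with Gaussian CDFs and minimized over w. The midpoint threshold below is a simplification of the Bayes rule on the transformed densities, and all inputs are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def misclassification(w, mu1, S1, mu2, S2):
    """One-dimensional probability of misclassification (equal priors) after
    projecting two multivariate normal classes onto direction w. Uses a
    midpoint threshold as a simplification."""
    w = w / np.linalg.norm(w)
    m1, s1 = w @ mu1, np.sqrt(w @ S1 @ w)
    m2, s2 = w @ mu2, np.sqrt(w @ S2 @ w)
    if m1 > m2:
        (m1, s1), (m2, s2) = (m2, s2), (m1, s1)
    t = 0.5 * (m1 + m2)
    return 0.5 * norm.sf(t, m1, s1) + 0.5 * norm.cdf(t, m2, s2)

mu1, mu2 = np.zeros(4), np.full(4, 1.0)        # illustrative class means
S1, S2 = np.eye(4), 2.0 * np.eye(4)            # illustrative covariances
res = minimize(misclassification, x0=np.ones(4), args=(mu1, S1, mu2, S2),
               method="Nelder-Mead")
w_best = res.x / np.linalg.norm(res.x)         # best linear combination
```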
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In the work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
Measurement of operator workload in an information processing task
NASA Technical Reports Server (NTRS)
Jenney, L. L.; Older, H. J.; Cameron, B. J.
1972-01-01
This was an experimental study to develop an improved methodology for measuring workload in an information processing task and to assess the effects of shift length and communication density (rate of information flow) on the ability to process and classify verbal messages. Each of twelve subjects was exposed to combinations of three shift lengths and two communication densities in a counterbalanced, repeated-measurements experimental design. Results indicated no systematic variation in task performance measures or in other dependent measures as a function of shift length or communication density. This is attributed to the absence of a secondary loading task, an insufficiently taxing work schedule, and the lack of psychological stress. Subjective magnitude estimates of workload showed fatigue (and to a lesser degree, tension) to be a power function of shift length. Estimates of task difficulty and fatigue were initially lower but increased more sharply over time under low-density than under high-density conditions. An interpretation of findings and recommendations for future research are included. This research has major implications for human workload problems in the information processing of air traffic control verbal data.
Carroll, Raymond J; Delaigle, Aurore; Hall, Peter
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
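A rough sketch of such an inverse-probability-weighted cumulative hazard estimator, assuming the non-missingness probabilities have already been estimated (e.g., by a kernel smoother as in the article); ties and edge cases are ignored for brevity, and the variable names are this sketch's own.

```python
import numpy as np

def ipw_cumulative_hazard(times, delta, observed, pi_hat):
    """Inverse probability-of-non-missingness weighted Nelson-Aalen sketch.

    times    : event/censoring times
    delta    : censoring indicator (1 = true failure), np.nan where missing
    observed : 1 if the censoring indicator was observed, else 0
    pi_hat   : estimated P(indicator observed | covariates), assumed given
    """
    order = np.argsort(times)
    t, d, o, p = (a[order] for a in (times, delta, observed, pi_hat))
    at_risk = len(t) - np.arange(len(t))             # risk-set size at each time
    w = np.where(o == 1, np.nan_to_num(d) / p, 0.0)  # IPW-completed indicator
    return t, np.cumsum(w / at_risk)                 # cumulative hazard estimate
```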
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
DENSITY: software for analysing capture-recapture data from passive detector arrays
Efford, M.G.; Dawson, D.K.; Robbins, C.S.
2004-01-01
A general computer-intensive method is described for fitting spatial detection functions to capture-recapture data from arrays of passive detectors such as live traps and mist nets. The method is used to estimate the population density of 10 species of breeding birds sampled by mist-netting in deciduous forest at Patuxent Research Refuge, Laurel, Maryland, U.S.A., from 1961 to 1972. Total density (9.9 ± 0.6 ha-1, mean ± SE) appeared to decline over time (slope -0.41 ± 0.15 ha-1 y-1). The mean precision of annual estimates for all 10 species pooled was acceptable (CV(D) = 14%). Spatial analysis of closed-population capture-recapture data highlighted deficiencies in non-spatial methodologies. For example, effective trapping area cannot be assumed constant when detection probability is variable. Simulation may be used to evaluate alternative designs for mist net arrays where density estimation is a study goal.
NASA Astrophysics Data System (ADS)
Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.
2009-12-01
We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth, and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use the simple relationship X_ice = X_over × (ρ_sed/ρ_ice), where X_over is the overburden thickness, ρ_sed is the sediment density (calculated from lithology and porosity), ρ_ice is the density of glacial ice (taken as 0.85 g/cm3), and X_ice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the 2 lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. Both produce best estimates of approximately 1,000 meters of ice during Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during Last Glacial Maximum, due to the overprinting effect of Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
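The conversion at the heart of this analysis is a one-liner; the sketch below applies it with a placeholder sediment density (the abstract fixes ρ_ice at 0.85 g/cm3).

```python
def ice_thickness(x_over, rho_sed, rho_ice=0.85):
    """X_ice = X_over * (rho_sed / rho_ice): densities in g/cm^3, thicknesses
    in meters. rho_ice = 0.85 follows the abstract; rho_sed is illustrative."""
    return x_over * (rho_sed / rho_ice)

x_ice = ice_thickness(1000.0, rho_sed=0.90)  # ~1060 m of equivalent ice
```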
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
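A toy version of the MCMC exploration described above: random-walk Metropolis over three parameters of a log-normal surface density, scored against Poisson point estimates. Everything here (the exact functional form, the data, the step sizes) is a placeholder showing the mechanics, not the paper's analysis.

```python
import numpy as np

def log_like(params, a, counts):
    """Poisson likelihood sketch for a log-normal orbital surface density
    with amplitude A, peak location mu (in ln AU), and width sigma."""
    A, mu, sigma = params
    if A <= 0.0 or sigma <= 0.0:
        return -np.inf
    rate = A * np.exp(-0.5 * ((np.log(a) - mu) / sigma) ** 2)
    return np.sum(counts * np.log(rate) - rate)

rng = np.random.default_rng(2)
a = np.logspace(np.log10(0.07), np.log10(400.0), 12)  # semimajor axes (AU)
counts = rng.poisson(3.0, size=a.size)                # placeholder "data"

theta = np.array([3.0, np.log(3.0), 1.0])             # initial guess
chain = []
for _ in range(5000):                                  # random-walk Metropolis
    prop = theta + rng.normal(scale=[0.2, 0.1, 0.1])
    if np.log(rng.uniform()) < log_like(prop, a, counts) - log_like(theta, a, counts):
        theta = prop
    chain.append(theta)
chain = np.array(chain)                                # posterior samples
```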
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Modelling population distribution using remote sensing imagery and location-based data
NASA Astrophysics Data System (ADS)
Song, J.; Prishchepov, A. V.
2017-12-01
Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution, and emergency response, and for estimating pressure on the environment as well as human exposure and health risks. However, most studies have relied on census data, because detailed, dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional-zone level. Firstly, urban functional zones within a city were mapped from high-resolution remote sensing images and points of interest (POIs). The workflow for functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points (Fig. 1). Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and the population density of sampling points, and employed both methods to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7 and showed significant variation over the region, unlike the traditional OLS model (Fig. 2). Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with light values (Fig. 3). Results showed: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can estimate population density when integrated with functional zones at the city level; (3) GWR is a robust model for mapping population distribution, and the adjusted R² of the GWR models was higher than that of the optimal OLS models, confirming better prediction accuracy. This method therefore provides detailed population density information for microscale citizen studies.
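A compact sketch of the GWR step: at each prediction location, ordinary least squares is replaced by weighted least squares with a Gaussian spatial kernel, so the light-to-population relationship can vary over the city. The bandwidth, design matrix, and synthetic data are assumptions of this sketch.

```python
import numpy as np

def gwr_fit_at(coords, X, y, coords0, bandwidth):
    """Geographically weighted regression at one prediction location:
    weighted least squares with a Gaussian spatial kernel, letting the
    coefficients vary over space (spatial nonstationarity)."""
    d = np.linalg.norm(coords - coords0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)   # local coefficients

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(200, 2))                    # sample locations
X = np.column_stack([np.ones(200), rng.uniform(0, 60, 200)])  # [1, light DN]
y = 50 + 8 * X[:, 1] + rng.normal(0, 20, 200)                 # population density
beta_local = gwr_fit_at(coords, X, y, np.array([5.0, 5.0]), bandwidth=2.0)
```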
Birds and insects as radar targets - A review
NASA Technical Reports Server (NTRS)
Vaughn, C. R.
1985-01-01
A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A nonGaussian three component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using nonGaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-square approaches are combined to yield a method for estimating the autocorrelation function parameters of a two component model for turbulence.
Very High-Frequency (VHF) ionospheric scintillation fading measurements at Lima, Peru
NASA Technical Reports Server (NTRS)
Blank, H. A.; Golden, T. S.
1972-01-01
During the spring equinox of 1970, scintillating signals at VHF (136.4 MHz) were observed at Lima, Peru. The transmission originated from ATS 3 and was observed through a pair of antennas spaced 1200 feet apart on an east-west baseline. The empirical data were digitized, reduced, and analyzed. The results include amplitude probability density and distribution functions, time autocorrelation functions, cross correlation functions for the spaced antennas, and appropriate spectral density functions. Estimates of the statistics of the ground diffraction pattern give insight into gross ionospheric irregularity size and irregularity velocity in the antenna planes.
Estimation of option-implied risk-neutral into real-world density by using calibration function
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-04-01
Option prices contain crucial information that can be used as a reflection of the future development of an underlying asset's price. The main objective of this study is to extract the risk-neutral density (RND) and the real-world density (RWD) from option prices. A volatility function technique with fourth-order polynomial interpolation is applied to obtain the RNDs. Then, a calibration function is used to convert the RNDs into RWDs. Two types of calibration function are considered: parametric and non-parametric. The densities are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity from January 2009 until December 2015. The performance of the extracted RNDs and RWDs is evaluated using a density forecasting test. This study found that the RWDs obtained provide more accurate information about the future price of the underlying asset than the RNDs. In addition, empirical evidence suggests that RWDs from a non-parametric calibration are more accurate than the other densities.
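RND extraction of this kind rests on the Breeden-Litzenberger relation, f(K) = e^{rT} ∂²C/∂K²; a finite-difference sketch follows. In the volatility-function approach, the call-price curve would first be rebuilt from the fourth-order polynomial fit to implied volatilities; that fitting step is assumed done here, and the toy call curve is invented.

```python
import numpy as np

def risk_neutral_density(strikes, calls, r, T):
    """Breeden-Litzenberger relation: the RND is the discounted second
    strike-derivative of the call-price curve, f(K) = exp(r*T) * d2C/dK2,
    taken here by finite differences."""
    d2C = np.gradient(np.gradient(calls, strikes), strikes)
    return np.exp(r * T) * d2C

# Toy smooth call curve standing in for prices rebuilt from a fitted
# volatility function
K = np.linspace(80.0, 120.0, 81)
C = np.maximum(100.0 - K, 0.0) + 2.0 * np.exp(-0.5 * ((K - 100.0) / 8.0) ** 2)
f_rnd = risk_neutral_density(K, C, r=0.01, T=1.0 / 12.0)
```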
Cost and performance model for redox flow batteries
NASA Astrophysics Data System (ADS)
Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent
2014-02-01
A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. Operating parameters such as power density and flow rates, and design parameters such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term, and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management system development for redox flow batteries. The main drivers of cost reduction for various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analysis further guides the suitability of various chemistries for different applications.
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
Measurement Model Nonlinearity in Estimation of Dynamical Systems
NASA Astrophysics Data System (ADS)
Majji, Manoranjan; Junkins, J. L.; Turner, J. D.
2012-06-01
The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
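The transformation rule under discussion is, for an invertible scalar mapping Y = g(X), p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹/dy|. A minimal sketch, checked on the standard log-normal example:

```python
import numpy as np
from scipy.stats import norm

def transform_pdf(pdf_x, g_inv, dg_inv_dy, y):
    """Change of variables for an invertible scalar mapping Y = g(X):
    p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}/dy|."""
    return pdf_x(g_inv(y)) * np.abs(dg_inv_dy(y))

# Known case: X ~ N(0,1) and Y = exp(X) make Y standard log-normal
y = np.linspace(0.1, 5.0, 200)
p_y = transform_pdf(norm.pdf, np.log, lambda v: 1.0 / v, y)
```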
NASA Astrophysics Data System (ADS)
Bura, E.; Zhmurov, A.; Barsegov, V.
2009-01-01
Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed distributed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
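Two of the bandwidth-selection routes the passage describes, a plug-in rule of thumb and cross-validation, are sketched below on synthetic "unfolding force" data; the bimodal mixture and the bandwidth grid are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
# Synthetic bimodal unfolding-force sample (pN); illustrative only
forces = np.concatenate([rng.normal(120, 15, 300), rng.normal(180, 10, 100)])

# Plug-in rule of thumb (Silverman): 0.9 * min(sd, IQR/1.34) * n^(-1/5)
iqr = np.subtract(*np.percentile(forces, [75, 25]))
h_silverman = 0.9 * min(forces.std(), iqr / 1.34) * forces.size ** -0.2

# Likelihood cross-validation over a bandwidth grid
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(2.0, 25.0, 47)}, cv=5)
grid.fit(forces[:, None])
h_cv = grid.best_params_["bandwidth"]
```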
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values, with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
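A sketch of the Spencer-Danner modified Rackett correlation referenced above. In practice the pseudo-critical properties of an oil or methyl-ester mixture would be estimated from its fatty-acid composition; the constants below are placeholders, not the paper's values.

```python
def rackett_density(T, Tc, Pc, Zra, M, R=8.314):
    """Spencer-Danner modified Rackett equation for saturated liquid molar
    volume, Vs = (R*Tc/Pc) * Zra**(1 + (1 - T/Tc)**(2/7)); density = M/Vs
    (kg/m^3 with M in kg/mol and Pc in Pa)."""
    Vs = (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))
    return M / Vs

# Placeholder pseudo-critical constants for a methyl-ester mixture
rho = rackett_density(T=313.15, Tc=780.0, Pc=1.2e6, Zra=0.24, M=0.29)
```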
Jiang, Shenghang; Park, Seongjin; Challapalli, Sai Divya; Fei, Jingyi; Wang, Yong
2017-01-01
We report a robust nonparametric descriptor, J′(r), for quantifying the density of clustering molecules in single-molecule localization microscopy. J′(r), based on nearest neighbor distribution functions, does not require any parameter as an input for analyzing point patterns. We show that J′(r) displays a valley shape in the presence of clusters of molecules, and the characteristics of the valley reliably report the clustering features in the data. Most importantly, the position of the J′(r) valley (rJm′) depends exclusively on the density of clustering molecules (ρc). Therefore, it is ideal for direct estimation of the clustering density of molecules in single-molecule localization microscopy. As an example, this descriptor was applied to estimate the clustering density of ptsG mRNA in E. coli bacteria. PMID:28636661
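J′(r) builds on classical nearest-neighbor machinery: with G(r) the nearest-neighbor distance CDF of the molecules and F(r) the empty-space function from reference locations, the textbook J-function is J(r) = (1 − G(r))/(1 − F(r)). The sketch below computes that classical quantity (edge corrections omitted); the paper's parameter-free variant differs in detail.

```python
import numpy as np
from scipy.spatial import cKDTree

def J_function(points, box, r, n_ref=20_000, seed=0):
    """Classical spatial J-function, J(r) = (1 - G(r)) / (1 - F(r)), with
    G the nearest-neighbor distance CDF of the points and F the empty-space
    CDF from random reference locations (edge corrections omitted)."""
    tree = cKDTree(points)
    d_nn = tree.query(points, k=2)[0][:, 1]     # nearest-neighbor distances
    ref = np.random.default_rng(seed).uniform(0.0, box, (n_ref, points.shape[1]))
    d_es = tree.query(ref, k=1)[0]              # empty-space distances
    G = np.searchsorted(np.sort(d_nn), r, side="right") / d_nn.size
    F = np.searchsorted(np.sort(d_es), r, side="right") / d_es.size
    return (1.0 - G) / (1.0 - F)   # ~1 for complete randomness, <1 for clusters

pts = np.random.default_rng(5).uniform(0.0, 10.0, (500, 2))
J = J_function(pts, box=10.0, r=np.linspace(0.0, 0.8, 40))
```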
Thermodynamically constrained correction to ab initio equations of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
French, Martin; Mattsson, Thomas R.
2014-07-07
We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette
2010-01-01
The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
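To see the scales involved, a drag-torque inversion of the kind implied above can be written in a few lines: a plume-induced torque equals a moment arm times the drag force, 0.5·Cd·ρ·v²·A, so ρ follows from the reconstructed torque. All numbers and the drag coefficient below are illustrative assumptions, not Cassini values.

```python
def plume_density(torque, arm, area, velocity, Cd=2.1):
    """Free-molecular drag inversion: torque = arm * (0.5 * Cd * rho * v**2 * A),
    so rho = 2 * torque / (Cd * arm * A * v**2). SI units throughout."""
    return 2.0 * torque / (Cd * arm * area * velocity**2)

# Illustrative magnitudes only: 0.02 N*m of estimated disturbance torque,
# 1.5 m moment arm, 15 m^2 projected area, 14 km/s flyby speed
rho = plume_density(torque=0.02, arm=1.5, area=15.0, velocity=14_000.0)
```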
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
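The truncation step has a closed form in one dimension: clip the Gaussian posterior to the constraint interval and take the mean of the truncated density. A sketch using scipy.stats.truncnorm, with an invented health-parameter example:

```python
from scipy.stats import truncnorm

def constrained_estimate(mu, sigma, lo, hi):
    """Truncate a scalar Kalman estimate's Gaussian PDF at the known
    constraints and return the mean of the truncated PDF as the
    constrained estimate."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.mean(a, b, loc=mu, scale=sigma)

# e.g., an engine health parameter physically confined to [0, 1] whose
# unconstrained estimate drifted slightly out of range
x_hat = constrained_estimate(mu=1.02, sigma=0.05, lo=0.0, hi=1.0)
```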
Equation of state for detonation product gases
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Kubota, Shiro
2003-03-01
A thermodynamic analysis procedure for the detonation product equation of state (EOS), together with the experimental data set of the detonation velocity as a function of initial density, has been formulated. The Chapman-Jouguet (CJ) state [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] on the p-ν plane is found to be well approximated by the envelope function formed by the collection of Rayleigh lines with many different initial density states. The Jones-Stanyukovich-Manson relation [W. Fickett and W. C. Davis, Detonation: Theory and Experiment (University of California Press, Berkeley, 1979)] is used to estimate the error included in this approximation. Based on this analysis, a simplified integration method to calculate the Grüneisen parameter along the CJ state curve with different initial densities, utilizing cylinder expansion data, has been presented. The procedure gives a simple way of obtaining an EOS function compatible with the detonation velocity data. Theoretical analysis has been performed for the precision of the estimated EOS function. The EOS of the pentaerythritol tetranitrate explosive is calculated and compared with experimental data such as CJ pressure data and cylinder expansion data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
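The two-band, ground-truth-free idea can be sketched as a simple unmixing against the limits of the data space; the percentile choice and NDVI formulation here are illustrative assumptions, not the report's algorithm:

import numpy as np

def fractional_cover(red, nir):
    # Take bare-soil and full-canopy endmembers from the extremes of the
    # red/NIR data space, then linearly unmix each pixel.
    ndvi = (nir - red) / (nir + red)
    ndvi_soil, ndvi_veg = np.percentile(ndvi, [2, 98])  # data-space limits
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(f, 0.0, 1.0)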
Galaxy-galaxy lensing estimators and their covariance properties
NASA Astrophysics Data System (ADS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, the score function must be estimated from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate better performance and shorter processing time.
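A minimal sketch of the underlying idea, score estimation from a Gaussian kernel density estimate, is shown below; the fixed bandwidth is a stand-in, whereas the paper fits a Gaussian mixture:

import numpy as np

def kde_score(x, samples, h=0.3):
    # Score function psi(x) = -p'(x)/p(x) from a Gaussian-kernel density
    # estimate; h is a hand-picked bandwidth.
    u = (x[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u**2)         # unnormalized Gaussian kernels
    p = k.sum(axis=1)               # proportional to the density estimate
    dp = (-u / h * k).sum(axis=1)   # proportional to its derivative
    return -dp / p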
Dolphin biosonar target detection in noise: wrap up of a past experiment.
Au, Whitlow W L
2014-07-01
The target detection capability of bottlenose dolphins in the presence of artificial masking noise was first studied by Au and Penner [J. Acoust. Soc. Am. 70, 687-693 (1981)], in which the dolphins' target detection threshold was determined as a function of the ratio of the echo energy flux density to the estimated received noise spectral density. Such a metric was commonly used in human psychoacoustics despite the fact that the echo energy flux density is not compatible with noise spectral density, which is averaged intensity per Hz. Since the earlier detection-in-noise studies, two important parameters, the dolphin integration time applicable to broadband clicks and the dolphin's auditory filter shape, have been determined. The inclusion of these two parameters allows for the estimation of the received energy flux density of the masking noise, so that dolphin target detection can now be determined as a function of the ratio of the received energy of the echo to the received noise energy. Using an integration time of 264 μs and an auditory bandwidth of 16.7 kHz, the ratio of the echo energy to noise energy at the target detection threshold is approximately 1 dB.
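The energy bookkeeping is simple arithmetic. In the sketch below the integration time and bandwidth are the values quoted above, while the noise and echo levels are illustrative stand-ins:

import math

tau = 264e-6       # integration time, s (from the paper)
bw = 16.7e3        # auditory filter bandwidth, Hz (from the paper)
N0_dB = 77.0       # hypothetical noise spectral density level, dB re 1 uPa^2/Hz
E_echo_dB = 125.0  # hypothetical echo energy flux density, dB re 1 uPa^2*s

# Received noise energy = N0 * bandwidth * integration time, i.e. in dB:
E_noise_dB = N0_dB + 10 * math.log10(bw) + 10 * math.log10(tau)
ratio_dB = E_echo_dB - E_noise_dB  # echo-to-noise energy ratio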
Saturated hydraulic conductivity of US soils grouped according to textural class and bulk density
USDA-ARS?s Scientific Manuscript database
The importance of the saturated hydraulic conductivity as a soil hydraulic property has led to the development of multiple pedotransfer functions for estimating it. One approach to estimating Ksat has been to use textural classes rather than specific textural fraction contents as pedotransfer inputs. The objective...
Online Reinforcement Learning Using a Probability Density Estimation.
Agostini, Alejandro; Celaya, Enric
2017-01-01
Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
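A toy version of the density-dependent forgetting idea, under an assumed functional form (the paper modulates per-component forgetting by the new information in each update, not by this rule):

def local_forgetting(base_lambda, density, d_ref):
    # In densely sampled regions (density >> d_ref) the factor approaches
    # base_lambda (aggressive forgetting); in sparse regions it approaches 1
    # (old estimates are preserved). d_ref is a hand-picked reference density.
    return 1.0 - (1.0 - base_lambda) * density / (density + d_ref)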
Using the Opposition Effect in Remotely Sensed Data to Assist in the Retrieval of Bulk Density
NASA Astrophysics Data System (ADS)
Ambeau, Brittany L.
Bulk density is an important geophysical property that impacts the mobility of military vehicles and personnel. Accurate retrieval of bulk density from remotely sensed data is, therefore, needed to estimate mobility on off-road terrain. For a particulate surface, the functional form of the opposition effect can provide valuable information about composition and structure. In this research, we examine the relationship between bulk density and the angular width of the opposition effect for a controlled set of laboratory experiments. Given a sample with a known bulk density, we collect reflectance measurements on a spherical grid for various illumination and view geometries, increasing the number of reflectance measurements collected at small phase angles near the opposition direction. Bulk densities are varied using a custom-made pluviation device, samples are measured using the Goniometer of the Rochester Institute of Technology-Two (GRIT-T), and observations are fit to the Hapke model using a grid-search method. The method selected allows for the direct estimation of five parameters: the single-scattering albedo, the amplitude of the opposition effect, the angular width of the opposition effect, and the two parameters that describe the single-particle phase function. As a test of the Hapke model, the retrieved bulk densities are compared to the known bulk densities. Results show that, as multi-angular reflectance measurements become more widely available, retrieval of the spatial distribution of bulk density from satellite and airborne sensors becomes increasingly feasible.
Are fractal dimensions of the spatial distribution of mineral deposits meaningful?
Raines, G.L.
2008-01-01
It has been proposed that the spatial distribution of mineral deposits is bifractal. An implication of this property is that the number of deposits in a permissive area is a function of the shape of the area. This is because the fractal density functions of deposits are dependent on the distance from known deposits. A long thin permissive area with most of the deposits in one end, such as the Alaskan porphyry permissive area, has a major portion of the area far from known deposits and consequently a low density of deposits associated with most of the permissive area. On the other hand, a more equi-dimensioned permissive area, such as the Arizona porphyry permissive area, has a more uniform density of deposits. Another implication of the fractal distribution is that the Poisson assumption typically used for estimating deposit numbers is invalid. Based on datasets of mineral deposits classified by type as inputs, the distributions of many different deposit types are found to have characteristically two fractal dimensions over separate non-overlapping spatial scales in the range of 5-1000 km. In particular, one typically observes a local dimension at spatial scales less than 30-60 km, and a regional dimension at larger spatial scales. The deposit type, geologic setting, and sample size influence the fractal dimensions. The consequence of the geologic setting can be diminished by using deposits classified by type. The crossover point between the two fractal domains is proportional to the median size of the deposit type. A plot of the crossover points for porphyry copper deposits from different geologic domains against median deposit sizes defines linear relationships and identifies regions that are significantly underexplored. Plots of the fractal dimension can also be used to define density functions from which the number of undiscovered deposits can be estimated. This density function is only dependent on the distribution of deposits and is independent of the definition of the permissive area. Density functions for porphyry copper deposits appear to be significantly different for regions in the Andes, Mexico, United States, and western Canada. Consequently, depending on which regional density function is used, quite different estimates of numbers of undiscovered deposits can be obtained. These fractal properties suggest that geologic studies based on mapping at scales of 1:24,000 to 1:100,000 may not recognize processes that are important in the formation of mineral deposits at scales larger than the crossover points at 30-60 km. © 2008 International Association for Mathematical Geology.
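The two-regime dimension estimate can be sketched with pair counting; the crossover radius and scale grid below are assumptions within the ranges quoted above:

import numpy as np

def fractal_dims(xy, r_break=45.0):
    # Slopes of log N(<r) vs log r below and above a crossover radius give
    # the local and regional fractal dimensions (assumes every radius bin
    # contains at least one pair; xy holds deposit coordinates in km).
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :])**2).sum(-1))
    d = d[np.triu_indices(len(xy), k=1)]              # unique pair distances
    r = np.logspace(np.log10(5), np.log10(1000), 30)  # 5-1000 km scales
    n = np.array([(d < ri).sum() for ri in r])
    lo, hi = r < r_break, r >= r_break
    dim_local = np.polyfit(np.log(r[lo]), np.log(n[lo]), 1)[0]
    dim_regional = np.polyfit(np.log(r[hi]), np.log(n[hi]), 1)[0]
    return dim_local, dim_regional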
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Arnold, James (Technical Monitor)
1999-01-01
The dissociation of WF6 and the related singly-charged cations and anions into the lower fluorides and fluorine atoms has been investigated theoretically using density functional theory (B3LYP) and relativistic effective core potentials, with estimates of spin-orbit effects included using a simple model. The inclusion of spin-orbit is essential for a correct description of the thermochemistry. The total atomization energy of the neutral and anionic WF6 is reproduced to within 25 kcal/mol, but comparison of individual bond dissociation energies with available experimental data shows discrepancies of up to 10 kcal/mol. The results are nevertheless useful to help resolve discrepancies in experimental data and provide estimates of missing data.
Direct estimations of linear and nonlinear functionals of a quantum state.
Ekert, Artur K; Alves, Carolina Moura; Oi, Daniel K L; Horodecki, Michał; Horodecki, Paweł; Kwek, L C
2002-05-27
We present a simple quantum network, based on the controlled-SWAP gate, that can extract certain properties of quantum states without recourse to quantum tomography. It can be used as a basic building block for direct quantum estimations of both linear and nonlinear functionals of any density operator. The network has many potential applications ranging from purity tests and eigenvalue estimations to direct characterization of some properties of quantum channels. Experimental realizations of the proposed network are within the reach of quantum technology that is currently being developed.
NASA Technical Reports Server (NTRS)
Gomez, Elena del V.; Garland, Jay L.; Roberts, Michael S.
2004-01-01
The present work tested whether the relationship between functional traits and inoculum density reflected structural diversity in bacterial communities from a land-use intensification gradient, applying a mathematical model. Terminal restriction fragment length polymorphism (T-RFLP) analysis was also performed to provide an independent assessment of species richness. Successive 10-fold dilutions of a soil suspension were inoculated onto Biolog GN® microplates. Soil bacterial density was determined by total cell and plate counts. The relationship between phenotypic traits and inoculum density fit the model, allowing the estimation of the maximal phenotypic potential (Rmax) and the inoculum density (KI) at which Rmax will be half-reduced. Though Rmax decreased with time elapsed since clearing of native vegetation, KI remained high in two of the disturbed sites. The genetic pool of the bacterial community did not experience a significant reduction, but the active fraction responding in the Biolog assay was adversely affected, suggesting a reduction in the functional potential. © 2004 Federation of European Microbiological Societies. Published by Elsevier B.V. All rights reserved.
An Equation of State for Hypersaline Water in Great Salt Lake, Utah, USA
Naftz, D.L.; Millero, F.J.; Jones, B.F.; Green, W.R.
2011-01-01
Great Salt Lake (GSL) is one of the largest and most saline lakes in the world. In order to accurately model limnological processes in GSL, hydrodynamic calculations require the precise estimation of water density (ρ) under a variety of environmental conditions. An equation of state was developed with water samples collected from GSL to estimate density as a function of salinity and water temperature. The ρ of water samples from the south arm of GSL was measured as a function of temperature ranging from 278 to 323 K and conductivity salinities ranging from 23 to 182 g L-1 using an Anton Paar density meter. These results have been used to develop the following equation of state for GSL (σ = ±0.32 kg m-3): ρ - ρ0 = 184.01062 + 1.04708·S - 1.21061·T + 3.14721×10⁻⁴·S² + 0.00199·T², where ρ0 is the density of pure water in kg m-3, S is conductivity salinity in g L-1, and T is water temperature in K. © 2011 U.S. Government.
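The fitted equation is straightforward to evaluate; the pure-water density in the example is an assumed value for the chosen temperature:

def gsl_density(S, T, rho0):
    # Density of GSL water (kg m-3) from the equation of state above;
    # S in g L-1, T in K, rho0 = density of pure water at T in kg m-3.
    return (rho0 + 184.01062 + 1.04708 * S - 1.21061 * T
            + 3.14721e-4 * S**2 + 0.00199 * T**2)

# Example: S = 100 g/L at T = 293 K, with rho0 ~ 998.2 kg/m3 assumed:
rho = gsl_density(100.0, 293.0, 998.2)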
Capture-recapture of white-tailed deer using DNA from fecal pellet-groups
Goode, Matthew J; Beaver, Jared T; Muller, Lisa I; Clark, Joseph D.; van Manen, Frank T.; Harper, Craig T; Basinger, P Seth
2014-01-01
Traditional methods for estimating white-tailed deer population size and density are affected by behavioral biases, poor detection in densely forested areas, and invalid techniques for estimating effective trapping area. We evaluated a noninvasive capture-recapture method for white-tailed deer (Odocoileus virginianus) density estimation using DNA extracted from fecal pellets as an individual marker and for gender determination, coupled with a spatial detection function to estimate density (spatially explicit capture-recapture, SECR). We collected pellet groups from 11 to 22 January 2010 at randomly selected sites within a 1-km2 area located on Arnold Air Force Base in Coffee and Franklin counties, Tennessee. We searched 703 10-m radius plots and collected 352 pellet-group samples from 197 plots over five two-day sampling intervals. Using only the freshest pellets, we recorded 140 captures of 33 different animals (15M:18F). Male and female densities were 1.9 (SE = 0.8) and 3.8 (SE = 1.3) deer km-2, for a total density of 5.8 deer km-2 (14.9 deer mile-2). Population size was 20.8 (SE = 7.6) over a 360-ha area, and the sex ratio was 1.0 M:2.0 F (SE = 0.71). We found that DNA sampling from pellet groups improved estimates of deer abundance, density, and sex ratio in contiguous landscapes, which could be used to track responses to harvest or other management actions.
Design and Processing of Electret Structures
2009-10-31
[Search-result snippet; only figure-caption fragments survive: (d) estimated current density j of a dissolving copper disk as a function of time; (e) total current I of the dissolving disk; an effect leading to a higher corrosion rate in the galvanic microreactor; depth estimated by focusing with a calibrated microscope stage; Figure 5: particle separation and electrolyte convection, scale bars in (A, D) are 100 µm.]
USDA-ARS?s Scientific Manuscript database
Saturated hydraulic conductivity Ksat is a fundamental characteristic in modeling flow and contaminant transport in soils and sediments. Therefore, many models have been developed to estimate Ksat from easily measureable parameters, such as textural properties, bulk density, etc. However, Ksat is no...
Electronic polarizability of light crude oil from optical and dielectric studies
NASA Astrophysics Data System (ADS)
George, A. K.; Singh, R. N.
2017-07-01
In the present paper we report the temperature dependence of the density, refractive indices and dielectric constant of three samples of crude oil. The API gravity number estimated from the temperature-dependent density studies revealed that the three samples fall in the category of light oil. The measured data of the refractive index and the density are used to evaluate the polarizability of these fluids. The molar refractive index and the molar volume are evaluated through the Lorentz-Lorenz equation. The function of the refractive index, FRI, divided by the mass density ρ is a constant approximately equal to one-third and is invariant with temperature for all the samples. The measured values of the dielectric constant decrease linearly with increasing temperature for all the samples. The dielectric constant estimated from the refractive index measurements using the Lorentz-Lorenz equation agrees well with the measured values. The results are promising since all three measured properties complement each other and offer a simple and reliable method for estimating crude oil properties in the absence of sufficient data.
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
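The per-voxel likelihood has a compact form: the sum of log rates at the photon arrival times minus the integrated rate. A minimal sketch (no penalties, numerical integration, hand-picked knots), not the paper's full spatiotemporal implementation:

import numpy as np
from scipy.interpolate import BSpline

def loglik(coeffs, knots, arrivals, T):
    # Log-likelihood of an inhomogeneous Poisson process whose rate is a
    # cubic B-spline with the given knots and control-vertex coefficients.
    rate = BSpline(knots, coeffs, 3, extrapolate=False)
    t = np.linspace(0.0, T, 2000)
    lam = np.nan_to_num(rate(t), nan=0.0)
    return (np.sum(np.log(np.maximum(rate(arrivals), 1e-12)))
            - np.trapz(lam, t))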
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only 3 control parameters, and a much-discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
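The moment targets are easy to reproduce by simulation; the parameter values below are illustrative:

import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    # Gaussian GARCH(1,1): sigma_t^2 = omega + alpha*x_{t-1}^2
    # + beta*sigma_{t-1}^2, and x_t = sigma_t * z_t with z_t ~ N(0,1).
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        x[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * x[t]**2 + beta * var
    return x

x = simulate_garch11(1e-5, 0.09, 0.90, 200_000)
m2 = np.mean(x**2)
kurt = np.mean(x**4) / m2**2    # kurtosis
m6_std = np.mean(x**6) / m2**3  # standardized 6th moment (fitting target)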
Multi-Paradigm Multi-Scale Simulations for Fuel Cell Catalysts and Membranes
2006-01-01
[Search-result snippet; fragments: applying newly developed QM density functionals (X3LYP) for estimating thermodynamics and kinetics; the B3LYP and X3LYP [6] flavors of DFT were used to probe chemical reaction mechanisms; QM calculations on the surface reactivity of the Pt and PtRu anode catalysts used a new ab initio DFT-GGA method (X3LYP) [6].]
NASA Astrophysics Data System (ADS)
Chen, Biao; Ruth, Chris; Jing, Zhenxue; Ren, Baorui; Smith, Andrew; Kshirsagar, Ashwini
2014-03-01
Breast density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density measures, which have potential advantages over area density measurement in risk assessment. One class of volume density computing methods is based on finding the relative fibro-glandular tissue attenuation with respect to the reference fat tissue, and the estimation of the effective x-ray tissue attenuation differences between fibro-glandular and fat tissue is key to volumetric breast density computing. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids the system-calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect the spectrum change and scatter-induced overestimation or underestimation of breast density; (2) it obtains the system-specific separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses has been used to evaluate the volumetric breast density measurement with the proposed method. The experimental results have shown that the method significantly improves the accuracy of estimating breast density.
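At its core the per-pixel computation is a log-signal ratio divided by an effective attenuation difference. A minimal sketch under that assumption; the function and variable names are hypothetical:

import numpy as np

def fibroglandular_thickness(I, I_fat_ref, delta_mu_eff):
    # Fibro-glandular tissue thickness from the log-signal ratio against an
    # all-fat reference, divided by the effective attenuation difference
    # between fibro-glandular and fat tissue (the quantity the paper models
    # from spectrum, breast thickness, and detector efficiency).
    return np.log(I_fat_ref / I) / delta_mu_eff
# Volumetric breast density = fibro-glandular volume / total breast volume.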
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture-recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture-recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional "saturation" trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this "blind" test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
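For contrast with the distance-sampling approach, the grid-based estimator reduces to dividing abundance by an effective area inflated by a boundary strip; the numbers and the strip rule below are illustrative:

def grid_density(n_hat, se_n, grid_side_m, mmdm_m):
    # Effective area = trapping grid plus a boundary strip; here the full
    # MMDM is added to each dimension, mirroring the full-MMDM adjustment
    # discussed above (half MMDM is another common choice).
    side = grid_side_m + mmdm_m
    area_ha = side**2 / 1e4
    return n_hat / area_ha, se_n / area_ha

d_hat, d_se = grid_density(n_hat=25, se_n=4.0, grid_side_m=90.0, mmdm_m=40.0)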
Nonparametric model validations for hidden Markov models with applications in financial econometrics
Zhao, Zhibiao
2011-01-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
Modal density function and number of propagating modes in ducts
NASA Technical Reports Server (NTRS)
Rice, E. J.
1976-01-01
The question of the number of propagating modes within a small range of mode cut-off ratio is addressed. The population density of modes was shown to be greatest near cut-off and least for the well-propagating modes. It was shown that modes of nearly the same cut-off ratio behave nearly the same in a sound absorbing duct, as well as in the way they propagate to the far field. Rather than handling all of the propagating modes individually, they can be grouped into several cut-off ratio ranges. It is important to know the modal density function to estimate the acoustic power distribution.
Klinger, R.; Rejmanek, M.
2009-01-01
Despite their potential to provide mechanistic explanations of rates of seed dispersal and seed fate, the functional and numerical responses of seed predators have never been explicitly examined within this context. Therefore, we investigated the numerical response of a small-mammal seed predator, Heteromys desmarestianus, to disturbance-induced changes in food availability and evaluated the degree to which removal and fate of seeds of eight tree species in a lowland tropical forest in Belize were related to the functional response of H. desmarestianus to varying seed densities. Mark-recapture trapping was used to estimate abundance of H. desmarestianus in six 0.5-ha grids from July 2000 to September 2002. Fruit availability and seed fate were estimated in each grid, and two experiments nested within the grids were used to determine (1) the form of the functional response for nine levels of fruit density (2-32 fruits/m2), (2) the removal rate and handling times, and (3) the total proportion of fruits removed. The total proportion of fruits removed was determined primarily by the numerical response of H. desmarestianus to fruit availability, while removal rates and the proportion of seeds eaten or cached were related primarily to the form of the functional response. However, the numerical and functional responses interacted; H. desmarestianus showed strong spatial and temporal numerical responses to total fruit availability, and their density relative to fruit availability resulted in variation in the form of the functional response. Types I, II, and III functional responses were observed, as were density-independent responses, and these responses varied both among and within fruit species. The highest proportions of fruits were eaten when the Type III functional response was detected, which was when fruit availability was high relative to H. desmarestianus population density. Numerous idiosyncratic influences on seed fate have been documented, but our results indicate that shifts in the numerical and functional responses of seed predators to seasonal and interannual variation in seed availability potentially provide a general mechanistic explanation for patterns of removal and fate for vertebrate-dispersed seeds. © 2009 by the Ecological Society of America.
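The functional-response forms referred to above are the classic Holling curves; a compact sketch with illustrative parameters:

def holling(N, a, h, exponent=1):
    # Intake rate vs. prey (fruit) density N: exponent=1 gives Type II,
    # exponent=2 gives Type III (a = attack rate, h = handling time per
    # item). Type I is the linear small-N limit a*N.
    return a * N**exponent / (1.0 + a * h * N**exponent)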
Physical Quality Indicators and Mechanical Behavior of Agricultural Soils of Argentina.
Imhoff, Silvia; da Silva, Alvaro Pires; Ghiberto, Pablo J; Tormena, Cássio A; Pilatti, Miguel A; Libardi, Paulo L
2016-01-01
Mollisols of Santa Fe have different tilth and load support capacity. Despite the importance of these attributes for achieving sustainable crop production, little information is available. The objectives of this study are i) to assess soil physical indicators related to plant growth and to soil mechanical behavior; and ii) to establish relationships to estimate the impact of soil loading on the soil quality for plant growth. The study was carried out on Argiudolls and Hapludolls of Santa Fe. Soil samples were collected to determine texture, organic matter content, bulk density, water retention curve, soil resistance to penetration, least limiting water range, critical bulk density for plant growth, compression index, pre-consolidation pressure and soil compressibility. Water retention curve and soil resistance to penetration were linearly and significantly related to clay and organic matter (R2 = 0.91 and R2 = 0.84). The pedotransfer functions of water retention curve and soil resistance to penetration allowed the estimation of the least limiting water range and critical bulk density for plant growth. A significant nonlinear relationship was found between critical bulk density for plant growth and clay content (R2 = 0.98). Compression index was significantly related to bulk density, water content, organic matter and clay plus silt content (R2 = 0.77). Pre-consolidation pressure was significantly related to organic matter, clay and water content (R2 = 0.77). Soil compressibility was significantly related to initial soil bulk density, clay and water content. A nonlinear and significant pedotransfer function (R2 = 0.88) was developed to predict the maximum acceptable pressure to be applied during tillage operations by introducing critical bulk density for plant growth in the compression model. The developed pedotransfer function provides a useful tool to link the mechanical behavior and tilth of the soils studied.
Lozach, Sophie; Dauvin, Jean-Claude; Méar, Yann; Murat, Anne; Davoult, Dominique; Migné, Aline
2011-12-01
Sampling the sea bottom surface remains difficult because of the surface hydraulic shock due to water flowing through the gear (i.e., the bow wave effect) and the loss of epifaunal organisms due to the gear's closing mechanism. Slow-moving mobile epifauna, such as the ophiuroid Ophiothrix fragilis, form high-density patches in the English Channel, not only on pebbles, as in the Dover Strait or offshore Brittany, but also on gravel in the Bay of Seine (>5000 ind m(-2)). Such populations form high biomasses and control the transfer of water from the water column to the sediment. Estimating their real density and biomass is essential for the assessment of benthic ecosystem functioning using trophic web modelling. In this paper, we present and discuss the patch patterns and sampling efficiency of the different methods for collecting in the dense beds of O. fragilis in the Bay of Seine. The large Hamon grab (0.25 m2) greatly under-estimated the ophiuroid density, while the Smith McIntyre appeared adequate among the tested sampling grabs. Nowadays, diver sampling, underwater photography and video from a remotely operated vehicle appear to be the recommended alternatives to estimate the real density of such dense slow-moving mobile epifauna. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method, in conjunction with a finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of spectral moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
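The moments-plus-Dirlik step can be sketched directly from a one-sided stress PSD; the formulas below are the standard Dirlik coefficients, not code from the paper:

import numpy as np

def dirlik_pdf(s, f, psd):
    # Dirlik's empirical PDF of stress ranges s from the spectral moments
    # m_k = integral of f^k * G(f) df of the one-sided PSD G(f).
    m0, m1, m2, m4 = (np.trapz(f**k * psd, f) for k in (0, 1, 2, 4))
    xm = (m1 / m0) * np.sqrt(m2 / m4)   # mean-frequency parameter
    g = m2 / np.sqrt(m0 * m4)           # irregularity factor
    D1 = 2.0 * (xm - g**2) / (1.0 + g**2)
    R = (g - xm - D1**2) / (1.0 - g - D1 + D1**2)
    D2 = (1.0 - g - D1 + D1**2) / (1.0 - R)
    D3 = 1.0 - D1 - D2
    Q = 1.25 * (g - D3 - D2 * R) / D1
    Z = s / (2.0 * np.sqrt(m0))
    return (D1 / Q * np.exp(-Z / Q)
            + D2 * Z / R**2 * np.exp(-Z**2 / (2.0 * R**2))
            + D3 * Z * np.exp(-Z**2 / 2.0)) / (2.0 * np.sqrt(m0))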
Mathematical models for nonparametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the function of r given by f(0|r).
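In the right-angle-distance case the density estimator takes the form D = n·f(0)/(2L); a minimal sketch, using a reflected Gaussian kernel as one of many possible estimators of f(0):

import numpy as np

def transect_density(y, L):
    # y = right-angle distances, L = total transect length.
    n = len(y)
    h = 1.06 * np.std(y) * n**(-0.2)   # rule-of-thumb bandwidth (assumed)
    f0 = 2.0 * np.mean(np.exp(-0.5 * (y / h)**2)) / (h * np.sqrt(2.0 * np.pi))
    return n * f0 / (2.0 * L)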
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reimund, Kevin K.; McCutcheon, Jeffrey R.; Wilson, Aaron D.
A general method was developed for estimating the volumetric energy efficiency of pressure retarded osmosis via pressure-volume analysis of a membrane process. The resulting model requires only the osmotic pressure, π, and mass fraction, w, of water in the concentrated and dilute feed solutions to estimate the maximum achievable specific energy density, u, as a function of operating pressure. The model is independent of any membrane or module properties. This method utilizes equilibrium analysis to specify the volumetric mixing fraction of concentrated and dilute solution as a function of operating pressure, and provides results for the total volumetric energy density of similar order to more complex models for the mixing of seawater and riverwater. Within the framework of this analysis, the total volumetric energy density is maximized, for an idealized case, when the operating pressure is π/(1+√(w⁻¹)), which is lower than the maximum power density operating pressure, Δπ/2, derived elsewhere, and is a function of the solute osmotic pressure at a given mass fraction. It was also found that a minimum of 1.45 kmol of ideal solute is required to produce 1 kWh of energy, while a system operating at the "maximum power density operating pressure" requires at least 2.9 kmol. Utilizing this methodology, it is possible to examine the effects of volumetric solution cost, operation of a module at various pressures, and operation of a constant-pressure module with various feeds.
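The headline comparison is a one-liner to check; the seawater-like inputs below are assumptions:

import math

def optimal_pressures(pi_conc, w):
    # Energy-density-optimal pressure pi/(1 + sqrt(1/w)) from the model
    # above vs. the classic maximum-power-density pressure delta_pi/2
    # (dilute feed assumed to have negligible osmotic pressure).
    return pi_conc / (1.0 + math.sqrt(1.0 / w)), pi_conc / 2.0

# Roughly seawater-like: pi ~ 27 bar, water mass fraction w ~ 0.965
p_energy, p_power = optimal_pressures(27.0, 0.965)  # ~13.4 bar < 13.5 bar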
Gaussian windows: A tool for exploring multivariate data
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1990-01-01
Presented here is a method for interactively exploring a large set of quantitative multivariate data, in order to estimate the shape of the underlying density function. It is assumed that the density function is more or less smooth, but no other specific assumptions are made concerning its structure. The local structure of the data in a given region may be examined by viewing the data through a Gaussian window, whose location and shape are chosen by the user. A Gaussian window is defined by giving each data point a weight based on a multivariate Gaussian function. The weighted sample mean and sample covariance matrix are then computed, using the weights attached to the data points. These quantities are used to compute an estimate of the shape of the density function in the window region. The local structure of the data is described by a method similar to the method of principal components. By taking many such local views of the data, we can form an idea of the structure of the data set. The method is applicable in any number of dimensions. The method can be used to find and describe simple structural features such as peaks, valleys, and saddle points in the density function, and also extended structures in higher dimensions. With some practice, we can apply our geometrical intuition to these structural features in any number of dimensions, so that we can think about and describe the structure of the data. Since the computations involved are relatively simple, the method can easily be implemented on a small computer.
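A minimal sketch of the windowing computation itself (the descriptive principal-component step would follow from the returned covariance):

import numpy as np

def gaussian_window(X, mu, Sigma):
    # Weight each row of X with a multivariate Gaussian window centered at
    # mu with shape Sigma, then return the weighted sample mean and
    # covariance used to describe the local structure of the density.
    d = X - mu
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, np.linalg.inv(Sigma), d))
    w /= w.sum()
    mean = w @ X
    c = X - mean
    cov = (w[:, None] * c).T @ c
    return mean, cov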
Khorozyan, Igor G; Malkhasyan, Alexander G; Abramov, Alexei V
2008-12-01
It is important to predict how many individuals of a predator species can survive in a given area on the basis of prey sufficiency and to compare predictive estimates with actual numbers to understand whether or not key threats are related to prey availability. Rugged terrain and low detection probabilities do not allow for the use of traditional prey count techniques in mountain areas. We used presence-absence occupancy modeling and camera-trapping to estimate the abundance and densities of prey species and regression analysis to predict leopard (Panthera pardus) densities from estimated prey biomass in the mountains of the Nuvadi area, Meghri Ridge, southern Armenia. The prey densities were 12.94 ± 2.18 individuals km(-2) for the bezoar goat (Capra aegagrus), 6.88 ± 1.56 for the wild boar (Sus scrofa) and 0.44 ± 0.20 for the roe deer (Capreolus capreolus). The detection probability of the prey was a strong function of the activity patterns, and was highest in diurnal bezoar goats (0.59 ± 0.09). Based on robust regression, the estimated total ungulate prey biomass (720.37 ± 142.72 kg km(-2)) can support a leopard density of 7.18 ± 3.06 individuals 100 km(-2). The actual leopard density is only 0.34 individuals 100 km(-2) (i.e., one subadult male recorded over the 296.9 km(2)), estimated from tracking and camera-trapping. The most plausible explanation for this discrepancy between predicted and actual leopard density is that poaching and disturbance caused by livestock breeding, plant gathering, deforestation and human-induced wild fires are affecting the leopard population in Armenia. © 2008 ISZS, Blackwell Publishing and IOZ/CAS.
New method for estimating low-earth-orbit collision probabilities
NASA Technical Reports Server (NTRS)
Vedder, John D.; Tabor, Jill L.
1991-01-01
An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
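A sketch of the fit-and-evaluate step, using a Weibull density as one natural extreme-value choice for minima bounded below by zero (the paper's exact density may differ; the synthetic data are illustrative):

import numpy as np
from scipy.stats import weibull_min

def collision_probability(min_distances, r_collision):
    # Fit a Weibull density to Monte Carlo minimum separation distances and
    # evaluate the probability of separation below the collision radius.
    c, loc, scale = weibull_min.fit(min_distances, floc=0.0)
    return weibull_min.cdf(r_collision, c, loc=loc, scale=scale)

rng = np.random.default_rng(1)
minima_km = rng.weibull(2.0, 5000) * 40.0  # synthetic minimum distances
p = collision_probability(minima_km, r_collision=0.05)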
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.
2014-12-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude-dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.
Seasonal variability in global eddy diffusion and the effect on neutral density
NASA Astrophysics Data System (ADS)
Pilinski, M. D.; Crowley, G.
2015-04-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.
2014-01-01
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
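The areal intersection density admits a compact estimate from the fiber density and the ODF; the sketch below uses a standard stochastic-geometry identity consistent with the paper's setup, with a Monte Carlo expectation and a sampled ODF added for illustration:

import numpy as np

def areal_intersection_density(L_A, odf_angles, n_mc=100_000, seed=0):
    # Mean fiber-fiber intersections per unit area for a planar network with
    # total fiber length per unit area L_A: 0.5 * L_A^2 * E|sin(t1 - t2)|,
    # which reduces to L_A^2 / pi for an isotropic orientation distribution.
    rng = np.random.default_rng(seed)
    t1, t2 = rng.choice(odf_angles, size=(2, n_mc))
    return 0.5 * L_A**2 * np.mean(np.abs(np.sin(t1 - t2)))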
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second concerns the construction of the likelihood function. In general, the likelihood is predefined using a known pdf; here, it is constructed in a new and different manner, using the considered system itself.
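The Bayes update itself can be sketched on a parameter grid, assuming a Gaussian observation error for the measured fundamental frequency (one simple choice; the paper builds the likelihood from the stochastic model itself):

import numpy as np

def posterior_tension(theta_grid, prior, f0_model, f0_data, sigma_noise):
    # posterior ∝ prior × likelihood, where the likelihood compares each
    # measured fundamental frequency with the model prediction f0_model(θ).
    lik = np.ones_like(theta_grid)
    for f_obs in f0_data:
        lik *= np.exp(-0.5 * ((f_obs - f0_model(theta_grid)) / sigma_noise)**2)
    post = prior * lik
    return post / np.trapz(post, theta_grid)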
Generalized local emission tomography
Katsevich, Alexander J.
1998-01-01
Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are input to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value of the discontinuity is estimated from this asymptotic behavior, given pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are input to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. The thresholds required to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
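The least squares density-difference (LSDD) statistic at the core of such a test has a closed-form solution when the density difference is modeled as a sum of Gaussian kernels. The sketch below is a minimal illustration of that estimator, not the authors' implementation; the kernel width sigma, the regularizer lam, and the data windows are assumptions fixed by hand here, whereas the paper derives the detection threshold from a target false positive rate.

```python
import numpy as np

def lsdd(x1, x2, sigma=1.0, lam=1e-3):
    """Least-squares density-difference estimate between two samples.

    Returns an estimate of the squared L2 distance between the two
    underlying densities; large values suggest a distribution change.
    """
    centers = np.vstack([x1, x2])            # kernel centers at the samples
    d = centers.shape[1]
    sq = lambda a, b: ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    # H: closed-form integrals of products of Gaussian kernels over R^d
    H = (np.pi * sigma**2) ** (d / 2) * np.exp(-sq(centers, centers) / (4 * sigma**2))
    # h: difference of mean kernel values under the two samples
    k1 = np.exp(-sq(x1, centers) / (2 * sigma**2)).mean(0)
    k2 = np.exp(-sq(x2, centers) / (2 * sigma**2)).mean(0)
    h = k1 - k2
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    return 2 * theta @ h - theta @ H @ theta  # estimate of integral of (p1-p2)^2

rng = np.random.default_rng(0)
ref = rng.normal(0, 1, (200, 2))
new = rng.normal(0.8, 1, (100, 2))           # shifted stream window
print(lsdd(ref[:100], ref[100:]))            # ~0: no change
print(lsdd(ref[:100], new))                  # clearly positive: change
```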
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
NASA Astrophysics Data System (ADS)
Poudjom Djomani, Y. H.; Diament, M.; Albouy, Y.
1992-07-01
The Adamawa massif in Central Cameroon is one of the African domal uplifts of volcanic origin. It is an elongated feature, 200 km wide. The gravity anomalies over the Adamawa uplift were studied to determine the mechanical behaviour of the lithosphere. Two approaches were used to analyse six gravity profiles that are 600 km long and that run perpendicular to the Adamawa trend. Firstly, the coherence function between topography and gravity was interpreted; secondly, source depth estimations by spectral analysis of the gravity data were performed. To obtain significant information for the interpretation of the experimental coherence function, the length of the profiles was varied from 320 km to 600 km. This treatment allows one to obtain numerical estimates of the coherence function. The coherence function analysis indicates that the lithosphere is deflected and thin beneath the Adamawa uplift, with an Effective Elastic Thickness of about 20 km. To fit the coherence, a load from below needs to be taken into account. This result for the Adamawa massif is of the same order of magnitude as those obtained for other African uplifts such as the Hoggar, Darfur and Kenya domes. For the depth estimation, three major density contrasts were found: the shallowest (4-15 km) can be correlated with shear zone structures and the associated sedimentary basins beneath the uplift; the second density contrast (18-38 km) corresponds to the Moho; and the deepest (70-90 km) would be the top of the upper mantle and denotes the low density zone beneath the Adamawa uplift.
Time-dependent earthquake forecasting: Method and application to the Italian region
NASA Astrophysics Data System (ADS)
Chan, C.; Sorensen, M. B.; Grünthal, G.; Hakimhashemi, A.; Heidbach, O.; Stromeyer, D.; Bosse, C.
2009-12-01
We develop a new approach for time-dependent earthquake forecasting and apply it to the Italian region. In our approach, the seismicity density is represented by a bandwidth function acting as a smoothing kernel in the neighborhood of past earthquakes. To incorporate fault-interaction-based forecasting, we calculate the Coulomb stress change imparted by each earthquake in the study area. From this, the change of seismicity rate as a function of time can be estimated through the concept of rate-and-state stress transfer. We apply our approach to the region of Italy, using earthquakes that occurred before 2003 to generate the seismicity density. To validate our approach, we compare the estimated seismicity density with the distribution of earthquakes with M≥3.8 after 2004. A positive correlation is found, and all of the examined earthquakes are located within the highest 66th percentile of seismicity density in the study region. Furthermore, the seismicity density at the epicenter of the 2009 April 6, Mw = 6.3, L'Aquila earthquake is within the highest 5th percentile. For the time-dependent seismicity rate change, we estimate the rate-and-state stress transfer imparted by the M≥5.0 earthquakes that occurred in the past 50 years. The results suggest that the seismicity rate increased at the locations of 65% of the examined earthquakes. Applying this approach to the L'Aquila sequence, considering seven M≥5.0 aftershocks as well as the main shock, yields significant spatial and temporal forecasts of the aftershock distribution.
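The first step of such a forecast, turning a past-earthquake catalogue into a smoothed seismicity density and ranking a target cell by its percentile, can be sketched compactly. This toy uses a fixed Gaussian bandwidth, whereas the paper's bandwidth function may vary in space; the catalogue, grid, and bandwidth below are hypothetical.

```python
import numpy as np

def seismicity_density(epicenters, grid, bandwidth_km=10.0):
    """Smooth past epicenters with a Gaussian kernel to obtain a relative
    seismicity density on a grid (all coordinates in km)."""
    d2 = ((grid[:, None, :] - epicenters[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bandwidth_km**2)).sum(1)
    return dens / dens.sum()

# hypothetical catalogue and evaluation grid
rng = np.random.default_rng(1)
quakes = rng.uniform(0, 100, (500, 2))
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = seismicity_density(quakes, grid)

# percentile rank of an arbitrary target cell, as used to validate forecasts:
# a rank near 1 means the cell lies in the highest percentiles of density
target = density[1275]
print((density <= target).mean())
```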
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
Grant M. Domke; Christopher W. Woodall; James E. Smith
2012-01-01
Until recently, standing dead tree biomass and carbon (C) has been estimated as a function of live tree growing stock volume in the U.S. Forest Service, Forest Inventory and Analysis (FIA) Program. Traditional estimates of standing dead tree biomass/C attributes were based on merchantability standards that did not reflect density reductions or structural loss due to...
Rianasari, Ina; de Jong, Michel P.; Huskens, Jurriaan; van der Wiel, Wilfred G.
2013-01-01
We demonstrate the application of the 1,3-dipolar cycloaddition ("click" reaction) to couple gold nanoparticles (Au NPs) functionalized with low densities of functional ligands. The ligand coverage on the citrate-stabilized Au NPs was adjusted by the ligand:Au surface atom ratio, while maintaining the colloidal stability of the Au NPs in aqueous solution. A procedure was developed to determine the driving forces governing the selectivity and reactivity of citrate-stabilized and ligand-functionalized Au NPs on patterned self-assembled monolayers. We observed selective and remarkably stable chemical bonding of the Au NPs to the complementarily functionalized substrate areas, even when estimating that only 1-2 chemical bonds are formed between the particles and the substrate. PMID:23434666
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
Turbulent motions within the atmospheric boundary layer (ABL) exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions (pdfs) of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and large-scale components of the increment fields. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increment pdfs across multiscale atmospheric motions. An approach is proposed for estimating these pdfs using the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we show that higher-order moments can be estimated accurately with a limited sample size, which has been a persistent concern in atmospheric turbulence research. Using robust goodness-of-fit (GoF) metrics, we quantify the accuracy of the candidate distributions across the diverse datasets. Through this analysis, the normal inverse Gaussian (NIG) distribution emerges as a prime candidate for modeling the increment pdfs. Using the NIG model and its parameters, we describe the variation of the increments over a range of scales, revealing unique scale-dependent qualities under various stability and flow conditions. This novel approach can characterize increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for future model development. The methodology developed here can benefit a number of applications, including wind energy and optical wave propagation.
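As a concrete illustration of the proposed estimation step, the four parameters of an NIG model can be fitted to an increment sample by maximum likelihood with off-the-shelf tools. The snippet below uses scipy.stats.norminvgauss on synthetic stand-in data; it illustrates the idea (MLE of four pdf parameters per scale, plus a goodness-of-fit check), not the dissertation's actual pipeline.

```python
import numpy as np
from scipy import stats

# synthetic stand-in for velocity increments delta_u at one separation scale
rng = np.random.default_rng(2)
du = stats.norminvgauss(a=1.5, b=0.3, loc=0.0, scale=1.0).rvs(5000, random_state=rng)

# maximum likelihood fit of the four NIG parameters
a, b, loc, scale = stats.norminvgauss.fit(du)
nig = stats.norminvgauss(a, b, loc=loc, scale=scale)

# higher-order moments from the fitted pdf rather than raw sample moments
mean, var, skew, kurt = nig.stats(moments='mvsk')
print(a, b, loc, scale, skew, kurt)

# a goodness-of-fit check, e.g. a Kolmogorov-Smirnov statistic
print(stats.kstest(du, nig.cdf))
```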
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
Ding, Jiarui; Shah, Sohrab; Condon, Anne
2016-01-01
Motivation: Many biological data processing problems can be formalized as clustering problems to partition data points into sensible and biologically interpretable groups. Results: This article introduces densityCut, a novel density-based clustering algorithm, which is both time- and space-efficient and proceeds as follows: densityCut first roughly estimates the densities of data points from a K-nearest neighbour graph and then refines the densities via a random walk. A cluster consists of points falling into the basin of attraction of an estimated mode of the underlying density function. A post-processing step merges clusters and generates a hierarchical cluster tree. The number of clusters is selected from the most stable clustering in the hierarchical cluster tree. Experimental results on ten synthetic benchmark datasets and two microarray gene expression datasets demonstrate that densityCut performs better than state-of-the-art algorithms for clustering biological datasets. For applications, we focus on cancer mutation clustering and single-cell data analyses, namely clustering variant allele frequencies of somatic mutations to reveal clonal architectures of individual tumours, clustering single-cell gene expression data to uncover cell population compositions, and clustering single-cell mass cytometry data to detect communities of cells of the same functional states or types. densityCut performs better than competing algorithms and is scalable to large datasets. Availability and Implementation: Data and the densityCut R package are available from https://bitbucket.org/jerry00/densitycut_dev. Contact: condon@cs.ubc.ca or sshah@bccrc.ca or jiaruid@cs.ubc.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153661
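The first stage of this kind of algorithm, a rough k-nearest-neighbour density estimate followed by assignment of each point to the basin of attraction of a local mode, can be sketched as below. This is a simplified stand-in, not the densityCut R package: it omits the random-walk density refinement and the hierarchical merging step, assumes no exact density ties, and uses an arbitrary k.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_mode_clustering(X, k=10):
    """Rough density from k-NN distances, then assign each point to the
    basin of attraction of a local mode by hill-climbing on the k-NN graph."""
    n, d = X.shape
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)            # neighbours include the point itself
    density = k / (n * dist[:, -1] ** d + 1e-12)  # proportional to k / (n * r_k^d)
    # each point steps to its densest neighbour (itself if it is the densest)
    step = idx[np.arange(n), density[idx].argmax(1)]
    labels = step.copy()
    for _ in range(50):                            # follow pointers up to the modes
        nxt = step[labels]
        if (nxt == labels).all():
            break
        labels = nxt
    return density, labels                         # labels: index of the mode point

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, .3, (200, 2)), rng.normal(3, .3, (200, 2))])
density, labels = knn_mode_clustering(X)
print(np.unique(labels).size)                      # ideally 2 modes for 2 blobs
```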
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.
1994-01-01
Modified coupled-pair functional (MCPF) calculations and coupled cluster singles and doubles calculations, which include a perturbational estimate of the connected triples [CCSD(T)], yield a bent structure for CuCO, thus supporting the prediction of a nonlinear structure based on density functional (DF) calculations. Our best estimate for the binding energy is 4.9 +/- 1.4 kcal/mol; this is in better agreement with experiment (6.0 +/- 1.2 kcal/mol) than the DF approach, which yields a value (19.6 kcal/mol) significantly larger than experiment.
Ab initio computation of the transition temperature of the charge density wave transition in TiSe2
NASA Astrophysics Data System (ADS)
Duong, Dinh Loc; Burghard, Marko; Schön, J. Christian
2015-12-01
We present a density functional perturbation theory approach to estimate the transition temperature of the charge density wave transition of TiSe2. The softening of the phonon mode at the L point, where a giant Kohn anomaly occurs in TiSe2, and the energy difference between the normal and distorted phases are analyzed. Both features are studied as functions of the electronic temperature, which corresponds to the Fermi-Dirac distribution smearing value in the calculation. The transition temperature is found to be 500 K by phonon analysis and 600 K by energy analysis, in reasonable agreement with the experimental value of 200 K.
An ab-initio investigation on SrLa intermetallic compound
NASA Astrophysics Data System (ADS)
Kumar, S. Ramesh; Jaiganesh, G.; Jayalakshmi, V.
2018-05-01
The electronic, elastic and thermodynamic properties of CsCl-type SrLa are investigated through density functional theory. The energy-volume relation for this compound has been obtained. The band structure, density of states and charge density in the (110) plane are also examined. The elastic constants (C11, C12 and C44) of SrLa are computed; then, using these elastic constants, the bulk modulus, shear modulus, Young's modulus and Poisson's ratio are derived. The calculated results show that CsCl-type SrLa is ductile at ambient conditions. The thermodynamic quantities such as free energy, entropy and heat capacity as functions of temperature are estimated and the results obtained are discussed.
Observations of core-mantle boundary Stoneley modes
NASA Astrophysics Data System (ADS)
Koelemeijer, Paula; Deuss, Arwen; Ritsema, Jeroen
2013-06-01
Core-mantle boundary (CMB) Stoneley modes represent a unique class of normal modes with extremely strong sensitivity to wave speed and density variations in the D" region. We measure splitting functions of eight CMB Stoneley modes using modal spectra from 93 events with Mw > 7.4 between 1976 and 2011. The obtained splitting function maps correlate well with the predicted splitting calculated for S20RTS+Crust5.1 structure and the distribution of Sdiff and Pdiff travel time anomalies, suggesting that they are robust. We illustrate how our new CMB Stoneley mode splitting functions can be used to estimate density variations in the Earth's lowermost mantle.
NASA Astrophysics Data System (ADS)
Espinho, S.; Hofmann, S.; Palomares, J. M.; Nijdam, S.
2017-10-01
The aim of this work is to study the properties of Ar-O2 microwave driven surfatron plasmas as a function of the Ar/O2 ratio in the gas mixture. The key parameters are the plasma electron density and electron temperature, which are estimated with Thomson scattering (TS) for O2 contents up to 50% of the total gas flow. A sharp drop in the electron density from 10^20 m^-3 to approximately 10^18 m^-3 is estimated as the O2 content in the gas mixture is increased up to 15%. For percentages of O2 lower than 10%, the electron temperature is estimated to be about 2-3 times higher than in a pure argon discharge under the same conditions (T_e ≈ 1 eV), and it gradually decreases as the O2 percentage is raised to 50%. However, for O2 percentages above 30%, the scattering spectra become Raman dominated, resulting in large uncertainties in the estimated electron densities and temperatures. The influence of photo-detached electrons from negative ions caused by the typical TS laser fluences is also likely to contribute to the uncertainty in the measured electron densities for high O2 percentages. Moreover, the detection limit of the system is reached for percentages of O2 higher than 25%. Additionally, both the electron density and temperature of microwave discharges with large Ar/O2 ratios are more sensitive to gas pressure variations.
Functional differentiability in time-dependent quantum mechanics.
Penz, Markus; Ruggenthaler, Michael
2015-03-28
In this work, we investigate the functional differentiability of the time-dependent many-body wave function and of derived quantities with respect to time-dependent potentials. For properly chosen Banach spaces of potentials and wave functions, Fréchet differentiability is proven. From this follows an estimate for the difference of two solutions to the time-dependent Schrödinger equation that evolve under the influence of different potentials. Such results can be applied directly to the one-particle density and to bounded operators, and present a rigorous formulation of non-equilibrium linear-response theory where the usual Lehmann representation of the linear-response kernel is not valid. Further, the Fréchet differentiability of the wave function provides a new route towards proving basic properties of time-dependent density-functional theory.
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio
We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of the parameters defining the dark matter correlations, in particular the amplitude σ_8. It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080 for galaxy number density fluctuations in cells of size 30 h^-1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.
Mathematical models for non-parametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) through Bayesian statistical inference, by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as estimating the posterior probability density of a set of parameters (referred to as x⃗), given prior information on these parameters and a likelihood that gives the probability density of observing a data set knowing x⃗. To solve this problem, two major paths can be taken: introduce approximations and hypotheses and obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations existing in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the chosen probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as multigroup cross section data assimilation, are presented.
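A minimal version of such a BMC scheme is a random-walk Metropolis sampler of the posterior. The sketch below is illustrative only: the "nuclear model", the Gaussian prior, and the measurement values are hypothetical stand-ins, and a production evaluation would sample many correlated parameters with tuned proposals.

```python
import numpy as np

def metropolis(log_post, x0, steps=20000, scale=0.1, seed=0):
    """Random-walk Metropolis sampling of pdf(posterior) ∝ pdf(prior) × likelihood,
    avoiding the linearisation behind generalized-least-squares adjustment."""
    rng = np.random.default_rng(seed)
    x, logp = x0, log_post(x0)
    chain = np.empty(steps)
    for i in range(steps):
        y = x + scale * rng.standard_normal()
        logq = log_post(y)
        if np.log(rng.uniform()) < logq - logp:    # accept/reject step
            x, logp = y, logq
        chain[i] = x
    return chain

# toy example: one "resonance parameter" with a Gaussian prior and a
# Gaussian measurement model (both hypothetical stand-ins)
model = lambda x: 0.5 * x**2                       # stand-in nuclear model
obs, sig = 2.3, 0.2
log_post = lambda x: (-0.5 * ((x - 2.0) / 0.5) ** 2          # prior
                      - 0.5 * ((model(x) - obs) / sig) ** 2)  # likelihood
chain = metropolis(log_post, x0=2.0)
print(chain[5000:].mean(), chain[5000:].std())     # posterior mean and width
```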
Blackwell, Bradley F; Seamans, Thomas W; White, Randolph J; Patton, Zachary J; Bush, Rachel M; Cepek, Jonathan D
2004-04-01
Oral rabies vaccination (ORV) baiting programs for control of raccoon (Procyon lotor) rabies in the USA have been conducted or are in progress in eight states east of the Mississippi River. However, data specific to the relationship between raccoon population density and the minimum density of baits necessary to significantly elevate rabies immunity are few. We used the 22-km2 US National Aeronautics and Space Administration Plum Brook Station (PBS) in Erie County, Ohio, USA, to evaluate the period of exposure for placebo vaccine baits placed at a density of 75 baits/km2 relative to raccoon population density. Our objectives were to 1) estimate raccoon population density within the fragmented forest, old-field, and industrial landscape at PBS; and 2) quantify the time that placebo Merial RABORAL V-RG vaccine baits were available to raccoons. From August through November 2002, we surveyed raccoon use of PBS along 19.3 km of paved-road transects by using a forward-looking infrared camera mounted inside a vehicle. We used Distance 3.5 software to calculate a probability of detection function by which we estimated raccoon population density from transect data. Estimated population density on PBS decreased from August (33.4 raccoons/km2) through November (13.6 raccoons/km2), yielding a monthly mean of 24.5 raccoons/km2. We also quantified exposure time for ORV baits placed by hand on five 1-km2 grids on PBS from September through October. An average of 82.7% (SD = 4.6) of baits were removed within 1 wk of placement. Given raccoon population density, estimates of bait removal and sachet condition, and assuming 22.9% nontarget take, the baiting density of 75/km2 yielded an average of 3.3 baits consumed per raccoon, with the sachet perforated.
Estimating neuronal connectivity from axonal and dendritic density fields
van Pelt, Jaap; van Ooyen, Arjen
2013-01-01
Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come into close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic "mass." A population mean "mass" density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion defining that axonal and dendritic line pieces should cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we developed new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields. PMID:24324430
Sandman, Antonia Nyström; Näslund, Johan; Gren, Ing-Marie; Norling, Karl
2018-05-05
Macrofaunal activities in sediments modify nutrient fluxes in different ways, including the expression of species-specific functional traits and density-dependent population processes. The invasive polychaete genus Marenzelleria was first observed in the Baltic Sea in the 1980s. It has caused changes in benthic processes and affected the functioning of ecosystem services such as nutrient regulation. The large-scale effects of these changes are not known. We estimated the current Marenzelleria spp. wet weight biomass in the Baltic Sea to be 60-87 kton (95% confidence interval). We assessed the potential impact of Marenzelleria spp. on phosphorus cycling using a spatially explicit model, comparing estimates of expected sediment-to-water phosphorus fluxes from a biophysical model to ecologically relevant experimental measurements of benthic phosphorus flux. The estimated yearly net increases (95% CI) in phosphorus flux due to Marenzelleria spp. were 4.2-6.1 kton based on the biophysical model and 6.3-9.1 kton based on experimental data. The current biomass densities of Marenzelleria spp. in the Baltic Sea enhance the phosphorus fluxes from sediment to water on a sea basin scale. Although high densities of Marenzelleria spp. can increase phosphorus retention locally, such biomass densities are uncommon. Thus, the major effect of Marenzelleria seems to be a large-scale net decrease in the self-cleaning capacity of the Baltic Sea that counteracts human efforts to mitigate eutrophication in the region.
A bias-corrected estimator in multiple imputation for missing data.
Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki
2018-05-29
Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is appealing in practice, since ordinary statistical methods can be applied to a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI that yields a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after imputation, using as a weight the density ratio between the imputation model and the true conditional density function for the missing variable. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset.
Anderson, Alexander S.; Marques, Tiago A.; Shoo, Luke P.; Williams, Stephen E.
2015-01-01
Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species. PMID:26110433
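As an illustration of why correcting for detectability matters, the sketch below fits a half-normal detection function to perpendicular detection distances, computes the Effective Strip Width, and converts counts to densities. This is a toy stand-in for purpose-built software such as the Distance packages: the half-normal shape, the untruncated strip, and the simulated "large-bodied vs small-bodied species" data are assumptions for illustration.

```python
import numpy as np

def halfnormal_esw(perp_dist_m):
    """MLE of the half-normal detection function g(y) = exp(-y^2 / (2 sigma^2))
    from perpendicular detection distances; returns the Effective Strip Width."""
    sigma2 = np.mean(np.asarray(perp_dist_m, float) ** 2)  # closed-form MLE
    return np.sqrt(np.pi * sigma2 / 2)                     # ESW = integral of g(y)

def density_per_ha(n_detections, transect_length_m, esw_m):
    """Line-transect density: n / (2 * L * ESW), converted to birds per hectare."""
    area_m2 = 2 * transect_length_m * esw_m
    return n_detections / area_m2 * 1e4

# hypothetical detections of a large-bodied vs a small-bodied species:
# same count, very different detectability, hence different densities
rng = np.random.default_rng(4)
y_large = np.abs(rng.normal(0, 40, 60))    # detectable far from the line
y_small = np.abs(rng.normal(0, 12, 60))
for y in (y_large, y_small):
    esw = halfnormal_esw(y)
    print(round(esw, 1), round(density_per_ha(len(y), 2000, esw), 2))
```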
Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.
2017-01-01
We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r2 = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
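A stripped-down version of such a model combines a type-II functional response (handling-limited intake) with a gut-capacity and evacuation constraint (satiation and digestion). The hourly simulation below is a hypothetical sketch of that structure; all parameter values (attack rate, handling time, gut capacity, evacuation rate) are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

def daily_consumption(prey_density, prey_mass_mg, attack=0.02, handling_h=0.01,
                      gut_cap_mg=50.0, evac_rate=0.25, hours=24):
    """Hourly simulation of a Holling type-II functional response capped by
    gut capacity, with exponential gut evacuation between hours."""
    gut = 0.0
    eaten_mg = 0.0
    for _ in range(hours):
        # type-II intake in prey per hour, converted to mass
        intake = attack * prey_density / (1 + attack * handling_h * prey_density)
        intake_mg = intake * prey_mass_mg
        room = max(gut_cap_mg - gut, 0.0)          # satiation constraint
        intake_mg = min(intake_mg, room)
        gut += intake_mg
        gut *= np.exp(-evac_rate)                  # digestion frees gut space
        eaten_mg += intake_mg
    return eaten_mg

# consumption rises with prey density but saturates via handling and satiation
for dens in (10, 100, 1000, 10000):
    print(dens, round(daily_consumption(dens, prey_mass_mg=0.5), 1))
```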
A cost-efficient method to assess carbon stocks in tropical peat soil
NASA Astrophysics Data System (ADS)
Warren, M. W.; Kauffman, J. B.; Murdiyarso, D.; Anshari, G.; Hergoualc'h, K.; Kurnianto, S.; Purbopuspito, J.; Gusmayanti, E.; Afifudin, M.; Rahajoe, J.; Alhamd, L.; Limin, S.; Iswandi, A.
2012-11-01
Estimation of belowground carbon stocks in tropical wetland forests requires funding for laboratory analyses and suitable facilities, which are often lacking in developing nations where most tropical wetlands are found. It is therefore beneficial to develop simple analytical tools to assist belowground carbon estimation where financial and technical limitations are common. Here we use published and original data to describe soil carbon density (kg C m-3; Cd) as a function of bulk density (g cm-3; Bd), which can be used to rapidly estimate belowground carbon storage using Bd measurements only. Predicted carbon densities and stocks are compared with those obtained from direct carbon analysis for ten peat swamp forest stands in three national parks of Indonesia. Analysis of soil carbon density and bulk density from the literature indicated a strong linear relationship (Cd = Bd × 495.14 + 5.41, R2 = 0.93, n = 151) for soils with organic C content > 40%. As organic C content decreases, the relationship between Cd and Bd becomes less predictable as soil texture becomes an important determinant of Cd. The equation predicted belowground C stocks to within 0.92% to 9.57% of observed values. Average bulk density of the collected peat samples was 0.127 g cm-3, which is in the upper range of previous reports for Southeast Asian peatlands. When the original data were included, the revised equation Cd = Bd × 468.76 + 5.82, with R2 = 0.95 and n = 712, was slightly below the lower 95% confidence interval of the original equation, and tended to decrease Cd estimates. We recommend this last equation for rapid estimation of soil C stocks for well-developed peat soils where C content > 40%.
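The recommended regression translates directly into a field-usable calculation. The helper below applies the revised equation Cd = Bd × 468.76 + 5.82 and scales it to a per-hectare stock; the 5 m peat depth in the example is a hypothetical input, and the equation is only claimed valid for peat with organic C content > 40%.

```python
def peat_carbon_density(bulk_density_g_cm3):
    """Soil carbon density (kg C m^-3) from bulk density (g cm^-3) using the
    revised regression Cd = Bd * 468.76 + 5.82 (peat with C content > 40%)."""
    return bulk_density_g_cm3 * 468.76 + 5.82

def carbon_stock_Mg_ha(bulk_density_g_cm3, peat_depth_m):
    """Belowground C stock for a uniform peat column of the given depth:
    kg C m^-3 * m = kg C m^-2, and 1 kg m^-2 = 10 Mg ha^-1."""
    cd = peat_carbon_density(bulk_density_g_cm3)
    return cd * peat_depth_m * 10

print(peat_carbon_density(0.127))        # ~65 kg C m^-3 at the mean observed Bd
print(carbon_stock_Mg_ha(0.127, 5.0))    # a hypothetical 5 m deep peat column
```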
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent 3-D holoscopic content, so efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to capture the statistical behavior of the signal, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in KDE, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme achieves better rate-distortion performance and better visual rendering quality.
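Bandwidth choice is indeed what makes or breaks a KDE. As a baseline against which a kernel-trick-based bandwidth estimator could be compared, the sketch below computes Silverman's rule-of-thumb bandwidth and evaluates a 1-D Gaussian KDE; the Laplace-distributed "residuals" are a synthetic stand-in for the image statistics being modeled, not data from the paper.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE."""
    n = len(x)
    spread = min(np.std(x, ddof=1),
                 (np.percentile(x, 75) - np.percentile(x, 25)) / 1.349)
    return 0.9 * spread * n ** (-1 / 5)

def kde_pdf(x, grid, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    z = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * z**2).sum(1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
residuals = rng.laplace(0, 4, 1000)            # stand-in for block statistics
h = silverman_bandwidth(residuals)
grid = np.linspace(-30, 30, 201)
pdf = kde_pdf(residuals, grid, h)
print(h, pdf.sum() * (grid[1] - grid[0]))      # density integrates to ~1
```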
Uncertain Photometric Redshifts with Deep Learning Methods
NASA Astrophysics Data System (ADS)
D'Isanto, A.
2017-06-01
The need for accurate photometric redshift estimation is of fundamental importance in astronomy, due to the necessity of efficiently obtaining redshift information without the need for spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
NASA Astrophysics Data System (ADS)
Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru
2017-11-01
The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os; the B3LYP functional in particular gives a stronger correlation than BP86 and B2PLYP. A comparison of contact densities between the SARC and well-tempered basis set (WTBS) indicates that numerical convergence of the contact density cannot be obtained, but the reproducibility is not very sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. We computationally obtain ΔR/R values for 99Ru and 189Os (36.2 keV) of 2.35×10^-4 and -0.20×10^-4, respectively, at the B3LYP level with the SARC basis set.
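The calibration at the heart of such a benchmark is a straight line: measured isomer shift versus computed contact density, whose slope is proportional to ΔR/R. The snippet below fits that line by least squares; the five compound values are hypothetical placeholders, not the paper's data, and the conversion from the slope to ΔR/R itself (which requires nuclear calibration constants) is only indicated in a comment.

```python
import numpy as np

# hypothetical benchmark arrays: computed contact densities (relative, a.u.)
# and measured isomer shifts (mm/s) for a series of compounds
contact_density = np.array([0.0, 1.2, 2.9, 4.1, 5.5])
isomer_shift = np.array([0.02, 0.28, 0.61, 0.95, 1.24])

# linear correlation delta = alpha * rho(0) + c; the calibration slope
# alpha is proportional to the nuclear parameter Delta R / R
alpha, c = np.polyfit(contact_density, isomer_shift, 1)
r = np.corrcoef(contact_density, isomer_shift)[0, 1]
print(alpha, c, r**2)    # the slope feeds the Delta R / R estimate
```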
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
Quantifying confidence in density functional theory predictions of magnetic ground states
NASA Astrophysics Data System (ADS)
Houchins, Gregory; Viswanathan, Venkatasubramanian
2017-10-01
Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful for the former, good candidates are often lost due to the uncertainty associated with DFT-predicted material properties. This uncertainty has gained prominence and has led to the development of exchange correlation functionals with built-in error estimation capability. In this work, we demonstrate the use of the built-in error estimation capabilities of the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare it against a range of GGA exchange correlation functionals, as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating the uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level, and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials; the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, a systematic test of a collection of plausible magnetic states is needed, especially in identifying antiferromagnetic (AFM) ground states. We believe that our approach to estimating uncertainty can be readily incorporated into high-throughput computational material discovery efforts, dramatically increasing the likelihood of finding good candidate materials.
Measuring the economic effects of Japan's Mikawa Port: Pre- and-post disaster assessments
NASA Astrophysics Data System (ADS)
Shibusawa, Hiroyuki; Miyata, Yuzuru
2017-10-01
This study examines the economic effects of Japan's Mikawa Port on Aichi Prefecture before and after a natural disaster that interrupts its operations for one year. Using a regional input-output model, backward and forward linkage impacts are calculated along the waterfront, where the auto industry is concentrated. In addition, economic damage from natural disasters is estimated. We assess the economic implications for the hinterland of Mikawa Port. Density functions of the backward and forward linkage impacts are derived. A production stoppage along the waterfront of Mikawa Port generates large indirect negative effects on the regional economy. The density functions of the total impacts are decreasing functions of distance, although several sectors are characterized by non-decreasing functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.
A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma-wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
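The model is a shot-noise process, and its lowest-order moment predictions are easy to check numerically: with one-sided exponential pulses of duration tau arriving at rate w and exponentially distributed amplitudes of mean A, the intermittency parameter is gamma = w*tau, the mean is gamma*A, and the variance is gamma*A^2. The simulation below is a minimal sketch with arbitrary parameter values, not the paper's analysis code.

```python
import numpy as np

def shot_noise(T=1000.0, dt=0.01, rate=0.1, tau=1.0, amp_mean=1.0, seed=0):
    """Superposition of uncorrelated one-sided exponential pulses with
    exponentially distributed amplitudes, mimicking radial blob motion."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    x = np.zeros_like(t)
    n_pulses = rng.poisson(rate * T)           # Poisson pulse arrivals
    arrivals = rng.uniform(0.0, T, n_pulses)
    amps = rng.exponential(amp_mean, n_pulses)
    for t0, a in zip(arrivals, amps):
        m = t >= t0
        x[m] += a * np.exp(-(t[m] - t0) / tau)  # fixed pulse shape and duration
    return t, x

t, x = shot_noise()
gamma = 0.1 * 1.0                 # intermittency parameter: rate * tau
print(x.mean(), x.var())          # theory: mean = gamma*A = 0.1, var = gamma*A^2 = 0.1
```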
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function techniques based on interpolation play an important role in extracting the risk-neutral density (RND) from options. The aim of this study is to compare the performance of two interpolation approaches, a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike price/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND with a fourth-order polynomial is more appropriate than using a smoothing spline, as the fourth-order polynomial gives the lowest mean square error (MSE).
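The extraction itself follows the Breeden-Litzenberger relation q(K) = e^{rT} d²C/dK²: interpolate the smile, convert back to call prices, and differentiate twice. The sketch below implements this with a fourth-order polynomial fit in strike space; the smile, rate, and maturity are hypothetical, and a careful implementation would also control extrapolation beyond the quoted strikes (or fit in delta space, as the paper allows).

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, vol):
    """Black-Scholes call price from an implied volatility."""
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def rnd_from_smile(strikes, ivols, S, T, r, grid):
    """Fit a fourth-order polynomial to the smile, convert to call prices,
    and apply Breeden-Litzenberger: q(K) = e^{rT} * d2C/dK2."""
    coef = np.polyfit(strikes, ivols, 4)       # interpolated volatility function
    vol = np.polyval(coef, grid)
    C = bs_call(S, grid, T, r, vol)
    dK = grid[1] - grid[0]
    return np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)

# hypothetical one-month index smile
S, T, r = 100.0, 1 / 12, 0.01
strikes = np.array([85, 90, 95, 100, 105, 110, 115.0])
ivols = np.array([0.26, 0.23, 0.21, 0.20, 0.20, 0.21, 0.23])
grid = np.linspace(80, 120, 401)
q = rnd_from_smile(strikes, ivols, S, T, r, grid)
dK = grid[1] - grid[0]
print(q.sum() * dK, (grid * q).sum() * dK)  # ~1, and the first moment (forward)
```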
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained.
Chen, Chien-Chang; Juan, Hung-Hui; Tsai, Meng-Yuan; Lu, Henry Horng-Shing
2018-01-11
By introducing methods of machine learning into density functional theory, we take a detour to construct the most probable density function, which can be estimated by learning relevant features from the system of interest. Using the properties of the universal functional, the vital core of density functional theory, the most probable cluster number and the corresponding cluster boundaries in a system under study can be determined simultaneously and automatically, with the plausibility grounded in the Hohenberg-Kohn theorems. For method validation and pragmatic applications, interdisciplinary problems from physical to biological systems are examined. The amalgamation of uncharged atomic clusters validated the unsupervised search for the cluster number, and the corresponding cluster boundaries were exhibited likewise. Highly accurate clustering results on Fisher's iris dataset showed the feasibility and flexibility of the proposed scheme. Brain tumor detection from low-dimensional magnetic resonance imaging datasets and segmentation of high-dimensional neural network imagery in the Brainbow system were also used to assess the method's practicality. The experimental results exhibit a successful connection between the physical theory and machine learning methods and will benefit clinical diagnosis.
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
Lewis Jordan; Ray Souter; Bernard Parresol; Richard F. Daniels
2006-01-01
Biomass estimation is critical for looking at ecosystem processes and as a measure of stand yield. The density-integral approach allows for coincident estimation of stem profile and biomass. The algebraic difference approach (ADA) permits the derivation of dynamic or nonstatic functions. In this study we applied the ADA to develop a self-referencing specific gravity...
Assessing a learning process with functional ANOVA estimators of EEG power spectral densities.
Gutiérrez, David; Ramírez-Moreno, Mauricio A
2016-04-01
We propose to assess the process of learning a task using electroencephalographic (EEG) measurements. In particular, we quantify changes in brain activity associated with the progression of the learning experience through functional analysis-of-variance (FANOVA) estimators of the EEG power spectral density (PSD). Such functional estimators provide a sense of the effect of training on the EEG dynamics. For that purpose, we implemented an experiment to monitor the process of learning to type using the Colemak keyboard layout during a twelve-lesson training. Hence, our aim is to identify statistically significant changes in the PSD of various EEG rhythms at different stages and difficulty levels of the learning process. Those changes are taken into account only when a probabilistic measure of the cognitive state ensures high engagement of the volunteer in the training. Based on this, a series of statistical tests are performed in order to determine the personalized frequencies and sensors at which changes in PSD occur, and then the FANOVA estimates are computed and analyzed. Our experimental results showed a significant decrease in the power of the [Formula: see text] and [Formula: see text] rhythms for ten volunteers during the learning process, and this decrease happens regardless of the difficulty of the lesson. These results are in agreement with previous reports of changes in PSD being associated with feature binding and memory encoding.
3D depth-to-basement and density contrast estimates using gravity and borehole data
NASA Astrophysics Data System (ADS)
Barbosa, V. C.; Martins, C. M.; Silva, J. B.
2009-05-01
We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack assuming the prior knowledge about the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we mapped a functional containing prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada's basement shows geologic structures that cannot be easily inferred just from the inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults. Also, we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and they are important in understanding the basin evolution and in detecting structural oil traps.
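For orientation, the parabolic density-contrast law being estimated can be stated compactly. The sketch below uses a parameterization common in the gravity-inversion literature (often attributed to Rao, 1990); treating it as the exact convention of this paper is an assumption, and the numbers are hypothetical:

```python
def parabolic_density_contrast(z, drho0, alpha):
    """Parabolic density-contrast law from the gravity literature:
    drho(z) = drho0**3 / (drho0 - alpha * z)**2.
    drho0: contrast at the surface (negative for sediments, kg/m^3)
    alpha: decay constant (kg/m^3 per m). The paper's exact
    parameterization is assumed here, not quoted."""
    return drho0**3 / (drho0 - alpha * z) ** 2

# Sediment contrast whose magnitude decays with depth (hypothetical values).
for z in (0.0, 500.0, 1000.0, 2000.0):
    print(z, parabolic_density_contrast(z, drho0=-400.0, alpha=0.1))
```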
Early Universe synthesis of asymmetric dark matter nuggets
NASA Astrophysics Data System (ADS)
Gresham, Moira I.; Lou, Hou Keong; Zurek, Kathryn M.
2018-02-01
We compute the mass function of bound states of asymmetric dark matter—nuggets—synthesized in the early Universe. We apply our results for the nugget density and binding energy computed from a nuclear model to obtain analytic estimates of the typical nugget size exiting synthesis. We numerically solve the Boltzmann equation for synthesis including two-to-two fusion reactions, estimating the impact of bottlenecks on the mass function exiting synthesis. These results provide the basis for studying the late Universe cosmology of nuggets in a future companion paper.
Grant M. Domke; Christopher W. Woodall; James E. Smith
2011-01-01
Standing dead trees are one component of forest ecosystem dead wood carbon (C) pools, whose national stock is estimated by the U.S. as required by the United Nations Framework Convention on Climate Change. Historically, standing dead tree C has been estimated as a function of live tree growing stock volume in the U.S.'s National Greenhouse Gas Inventory. Initiated...
Density functional theory and chromium: Insights from the dimers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Würdemann, Rolf; Kristoffersen, Henrik H.; Moseler, Michael
2015-03-28
The binding in small Cr clusters is re-investigated, where the correct description of the dimer in three charge states is used as criterion to assign the most suitable density functional theory approximation. The difficulty in chromium arises from the subtle interplay between energy gain from hybridization and energetic cost due to exchange between s and d based molecular orbitals. Variations in published bond lengths and binding energies are shown to arise from insufficient numerical representation of electron density and Kohn-Sham wave-functions. The best functional performance is found for gradient corrected (GGA) functionals and meta-GGAs, where we find severe differences between functionals from the same family due to the importance of exchange. Only the "best fit" from Bayesian error estimation is able to predict the correct energetics for all three charge states unambiguously. With this knowledge, we predict small bond-lengths to be exclusively present in Cr₂ and Cr₂⁻. Already for the dimer cation, solely long bond-lengths appear, similar to what is found in the trimer and in chromium bulk.
NASA Astrophysics Data System (ADS)
Marks, Michael; Kroupa, Pavel; Dabringhausen, Jörg; Pawlowski, Marcel S.
2012-05-01
Residual-gas expulsion after cluster formation has recently been shown to leave an imprint in the low-mass present-day stellar mass function (PDMF), which allowed the estimation of birth conditions of some Galactic globular clusters (GCs), such as mass, radius and star formation efficiency. We show that in order to explain their characteristics (masses, radii, metallicity and PDMF), their stellar initial mass function (IMF) must have been top-heavy. The IMF is found to be required to become more top-heavy the lower the cluster metallicity and the larger the pre-GC cloud-core density. The deduced trends are in qualitative agreement with theoretical expectation. The results are consistent with estimates of the shape of the high-mass end of the IMF in the Arches cluster, Westerlund 1, R136 and NGC 3603, as well as with the IMF independently constrained for ultra-compact dwarf galaxies (UCDs). The latter suggests that GCs and UCDs might have formed along the same channel or that UCDs formed via mergers of GCs. A Fundamental Plane is found which describes the variation of the IMF with the density and metallicity of the pre-GC cloud cores. The implications for the evolution of galaxies and chemical enrichment over cosmological times are expected to be major.
NASA Astrophysics Data System (ADS)
Hoogenboom, M.; Beraud, E.; Ferrier-Pagès, C.
2010-03-01
This study quantified variation in net photosynthetic carbon gain in response to natural fluctuations in symbiont density for the Mediterranean coral Cladocora caespitosa, and evaluated which density maximized photosynthetic carbon acquisition. To do this, carbon acquisition was modeled as an explicit function of symbiont density. The model was parameterized using measurements of rates of photosynthesis and respiration for small colonies with a broad range of zooxanthella concentrations. Results demonstrate that rates of net photosynthesis increase asymptotically with symbiont density, whereas rates of respiration increase linearly. In combination, these functional responses meant that colony energy acquisition decreased at both low and very high zooxanthella densities. However, there was a wide range of symbiont densities for which net daily photosynthesis was approximately equivalent. Therefore, significant changes in symbiont density do not necessarily cause a change in autotrophic energy acquisition by the colony. Model estimates of the optimal range of cell densities corresponded well with independent observations of symbiont concentrations obtained from field and laboratory studies of healthy colonies. Overall, this study demonstrates that the seasonal fluctuations in symbiont numbers observed in healthy colonies of the Mediterranean coral investigated do not have a strong effect on photosynthetic energy acquisition.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
First-principles calculation of the reflectance of shock-compressed xenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, G. E.; Saitov, I. M., E-mail: saitovilnur@gmail.com; Stegailov, V. V.
2015-05-15
Within electron density functional theory (DFT), the reflectance of radiation from shock-compressed xenon plasma is calculated. The dependence of the reflectance on the frequency of the incident radiation and on the plasma density is considered. The Fresnel formula is used. The expression for the longitudinal dielectric tensor in the long-wavelength limit is used to calculate the imaginary part of the dielectric function (DF). The real part of the DF is determined by the Kramers-Kronig transformation. The results are compared with experimental data. An approach is proposed to estimate the plasma frequency in shock-compressed xenon.
NASA Astrophysics Data System (ADS)
Crespo Campo, L.; Bello Garrote, F. L.; Eriksen, T. K.; Görgen, A.; Guttormsen, M.; Hadynska-Klek, K.; Klintefjord, M.; Larsen, A. C.; Renstrøm, T.; Sahin, E.; Siem, S.; Springer, A.; Tornyi, T. G.; Tveten, G. M.
2016-10-01
Particle-γ coincidence data have been analyzed to obtain the nuclear level density and the γ-strength function of 64Ni by means of the Oslo method. The level density found in this work is in very good agreement with known energy levels at low excitation energies, as well as with data deduced from particle-evaporation measurements at excitation energies above Ex ≈ 5.5 MeV. The experimental γ-strength function presents an enhancement at γ energies below Eγ ≈ 3 MeV and possibly a resonance-like structure centered at Eγ ≈ 9.2 MeV. The obtained nuclear level density and γ-strength function have been used to estimate the (n,γ) cross section for the s-process branch-point nucleus 63Ni, of particular interest for astrophysical calculations of elemental abundances.
NASA Technical Reports Server (NTRS)
Lee, T.; Boland, D. F., Jr.
1980-01-01
This document presents the results of an extensive survey and comparative evaluation of current atmosphere and wind models for inclusion in the Langley Atmospheric Information Retrieval System (LAIRS). It includes recommended models for use in LAIRS, estimated accuracies for the recommended models, and functional specifications for the development of LAIRS.
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
…such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the… previous study [30] dealt with compact intervals of ℝ. Splines are intimately tied to optimization problems through their variational theory pioneered… approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction
Integrating resource selection into spatial capture-recapture models for large carnivores
Proffitt, Kelly M.; Goldberg, Joshua; Hebblewhite, Mark; Russell, Robin E.; Jimenez, Ben; Robinson, Hugh S.; Pilgrim, Kristine; Schwartz, Michael K.
2015-01-01
Wildlife managers need reliable methods to estimate large carnivore densities and population trends; yet large carnivores are elusive, difficult to detect, and occur at low densities making traditional approaches intractable. Recent advances in spatial capture-recapture (SCR) models have provided new approaches for monitoring trends in wildlife abundance and these methods are particularly applicable to large carnivores. We applied SCR models in a Bayesian framework to estimate mountain lion densities in the Bitterroot Mountains of west central Montana. We incorporate an existing resource selection function (RSF) as a density covariate to account for heterogeneity in habitat use across the study area and include data collected from harvested lions. We identify individuals through DNA samples collected by (1) biopsy darting mountain lions detected in systematic surveys of the study area, (2) opportunistically collecting hair and scat samples, and (3) sampling all harvested mountain lions. We included 80 DNA samples collected from 62 individuals in the analysis. Including information on predicted habitat use as a covariate on the distribution of activity centers reduced the median estimated density by 44%, the standard deviation by 7%, and the width of 95% credible intervals by 10% as compared to standard SCR models. Within the two management units of interest, we estimated a median mountain lion density of 4.5 mountain lions/100 km2 (95% CI = 2.9, 7.7) and 5.2 mountain lions/100 km2 (95% CI = 3.4, 9.1). Including harvested individuals (dead recovery) did not create a significant bias in the detection process by introducing individuals that could not be detected after removal. However, the dead recovery component of the model did have a substantial effect on results by increasing sample size. The ability to account for heterogeneity in habitat use provides a useful extension to SCR models, and will enhance the ability of wildlife managers to reliably and economically estimate density of wildlife populations, particularly large carnivores.
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Indirect Validation of Probe Speed Data on Arterial Corridors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham
This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. This study showed strong correlations between these geometric attributes and the accuracy of probe data when they were assessed by using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known input (measured) forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts respectively, from frequency response functions and spectral density functions. To estimate these functions accurately sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
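To make the contrast with the traditional starting point concrete: a standard frequency-domain modal analysis begins from a frequency response function (FRF) estimated by averaged spectra, which is exactly what short records make difficult. A minimal sketch of the classical H1 estimator on a synthetic single-mode system (all parameters assumed; this is the baseline approach, not the paper's short-record method):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n = 1024.0, 1 << 14
u = rng.standard_normal(n)                     # measured input force

# Stand-in structure: one mode at 50 Hz with 2% damping (assumed values).
wn, zeta = 2 * np.pi * 50.0, 0.02
sdof = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
_, y, _ = signal.lsim(sdof, u, np.arange(n) / fs)
y = y + 0.05 * rng.standard_normal(n)          # output with measurement noise

# Classical H1 estimate of the FRF from averaged spectra: H1 = S_uy / S_uu.
f, S_uu = signal.welch(u, fs=fs, nperseg=2048)
_, S_uy = signal.csd(u, y, fs=fs, nperseg=2048)
H1 = S_uy / S_uu
print("resonance near", f[np.argmax(np.abs(H1))], "Hz")   # ~50 Hz
```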
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
The scaling of contact rates with population density for the infectious disease models.
Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip
2013-08-01
Contact rates and patterns among individuals in a geographic area drive transmission of directly transmitted pathogens, making it essential to understand and estimate contacts for simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population thresholds and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission, with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data on human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of an organized social contact structure shifts the physical contacts towards a special case of the spatial contact model: the dynamics of kinetic gas molecule collisions. In this case, an ideal gas model with a van der Waals correction fits well existing movement observation data, and the contact rate between individuals is estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
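A toy saturating contact-rate curve makes the two limits explicit: roughly linear growth at low density (the density-dependent regime) and a plateau at high density (the frequency-dependent regime). The Michaelis-Menten form below is chosen purely for illustration and is not the paper's fitted model:

```python
import numpy as np

def contact_rate(density, c_max, rho_half):
    """Saturating contact-rate curve: ~linear in density at low density
    (density-dependent limit), ~constant at high density (frequency-dependent
    limit). The functional form and parameters are illustrative assumptions."""
    return c_max * density / (rho_half + density)

densities = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])   # people per km^2
print(contact_rate(densities, c_max=20.0, rho_half=300.0))  # contacts per day
```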
Inference about density and temporary emigration in unmarked populations
Chandler, Richard B.; Royle, J. Andrew; King, David I.
2011-01-01
Few species are distributed uniformly in space, and populations of mobile organisms are rarely closed with respect to movement, yet many models of density rely upon these assumptions. We present a hierarchical model allowing inference about the density of unmarked populations subject to temporary emigration and imperfect detection. The model can be fit to data collected using a variety of standard survey methods such as repeated point counts in which removal sampling, double-observer sampling, or distance sampling is used during each count. Simulation studies demonstrated that parameter estimators are unbiased when temporary emigration is either "completely random" or is determined by the size and location of home ranges relative to survey points. We also applied the model to repeated removal sampling data collected on Chestnut-sided Warblers (Dendroica pensylvanica) in the White Mountain National Forest, USA. The density estimate from our model, 1.09 birds/ha, was similar to an estimate of 1.11 birds/ha produced by an intensive spot-mapping effort. Our model is also applicable when processes other than temporary emigration affect the probability of being available for detection, such as in studies using cue counts. Functions to implement the model have been added to the R package unmarked.
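The hierarchical logic (latent abundance, imperfect detection) is easiest to see in the basic binomial N-mixture model that this work extends. A minimal likelihood sketch, assuming Poisson abundance and constant detection, and omitting the paper's temporary-emigration layer:

```python
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, y, n_max=100):
    """Binomial N-mixture likelihood (Royle 2004): counts y[i, j] at site i,
    visit j, with N_i ~ Poisson(lam) and detection probability p. The paper's
    temporary-emigration availability layer is omitted for brevity."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    n = np.arange(n_max + 1)
    prior = stats.poisson.pmf(n, lam)            # P(N = n)
    nll = 0.0
    for counts in y:                             # marginalize N at each site
        like_n = prior.copy()
        for c in counts:                         # visits independent given N
            like_n *= stats.binom.pmf(c, n, p)
        nll -= np.log(like_n.sum())
    return nll

rng = np.random.default_rng(2)
N_true = rng.poisson(5.0, size=50)                       # latent abundances
y = rng.binomial(N_true[:, None], 0.4, size=(50, 3))     # 3 visits per site
fit = optimize.minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,))
print("lambda-hat:", np.exp(fit.x[0]), " p-hat:", 1 / (1 + np.exp(-fit.x[1])))
```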
Ge, Zhenpeng; Wang, Yi
2017-04-20
Molecular dynamics simulations of nanoparticles (NPs) are increasingly used to study their interactions with various biological macromolecules. Such simulations generally require detailed knowledge of the surface composition of the NP under investigation. Even for some well-characterized nanoparticles, however, this knowledge is not always available. An example is nanodiamond, a nanoscale diamond particle with a surface dominated by oxygen-containing functional groups. In this work, we explore using the harmonic restraint method developed by Venable et al. to estimate the surface charge density (σ) of nanodiamonds. Based on the Gouy-Chapman theory, we convert the experimentally determined zeta potential of a nanodiamond to an effective charge density (σ_eff), and then use the latter to estimate σ via molecular dynamics simulations. By scanning a series of nanodiamond models, we show that the above method provides a straightforward protocol to determine the surface charge density of relatively large (≳100 nm) NPs. Overall, our results suggest that despite certain limitations, the above protocol can be readily employed to guide model construction for MD simulations, which is particularly useful when only limited experimental information on the NP surface composition is available to a modeler.
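The zeta-potential-to-charge-density conversion rests on standard diffuse-layer theory. A sketch of the Gouy-Chapman (Grahame) relation for a symmetric electrolyte; the inputs are illustrative, and equating the zeta potential with the diffuse-layer potential is the usual approximation rather than the paper's exact protocol:

```python
import numpy as np

def grahame_sigma(zeta_V, ionic_strength_M, T=298.15, eps_r=78.5, z=1):
    """Gouy-Chapman (Grahame) relation between the diffuse-layer potential and
    the surface charge density for a symmetric z:z electrolyte, in C/m^2.
    Identifying the zeta potential with the diffuse-layer potential is the
    usual approximation; the paper's exact settings are not reproduced here."""
    e, kB = 1.602176634e-19, 1.380649e-23        # C, J/K
    eps0, NA = 8.8541878128e-12, 6.02214076e23   # F/m, 1/mol
    n0 = ionic_strength_M * 1e3 * NA             # ions per m^3
    return np.sqrt(8.0 * n0 * eps_r * eps0 * kB * T) * np.sinh(
        z * e * zeta_V / (2.0 * kB * T))

# A -40 mV zeta potential in 10 mM monovalent salt (hypothetical nanodiamond):
print(grahame_sigma(-0.040, 0.010), "C/m^2")     # roughly -0.01 C/m^2
```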
The potential role of perivascular lymphatic vessels in preservation of kidney allograft function.
Tsuchimoto, Akihiro; Nakano, Toshiaki; Hasegawa, Shoko; Masutani, Kosuke; Matsukuma, Yuta; Eriguchi, Masahiro; Nagata, Masaharu; Nishiki, Takehiro; Kitada, Hidehisa; Tanaka, Masao; Kitazono, Takanari; Tsuruya, Kazuhiko
2017-08-01
Lymphangiogenesis occurs in diseased native kidneys and kidney allografts, and correlates with histological injury; however, the clinical significance of lymphatic vessels in kidney allografts is unclear. This study retrospectively reviewed 63 kidney transplant patients who underwent protocol biopsies. Lymphatic vessels were identified by immunohistochemical staining for podoplanin, and were classified according to their location as perivascular or interstitial lymphatic vessels. The associations between perivascular lymphatic density and kidney allograft function and pathological findings were analyzed. There were no significant differences in perivascular lymphatic densities in kidney allograft biopsy specimens obtained at 0 h, 3 months and 12 months. The groups with higher perivascular lymphatic density showed a lower proportion of progression of interstitial fibrosis/tubular atrophy grade from 3 to 12 months (P for trend = 0.039). Perivascular lymphatic density was significantly associated with annual decline of estimated glomerular filtration rate after 12 months (r = -0.31, P = 0.017), even after adjusting for multiple confounders (standardized β = -0.30, P = 0.019). High perivascular lymphatic density is associated with favourable kidney allograft function. The perivascular lymphatic network may be involved in inhibition of allograft fibrosis and stabilization of graft function.
Staid, Andrea; Watson, Jean -Paul; Wets, Roger J. -B.; ...
2017-07-11
Forecasts of available wind power are critical in key electric power systems operations planning problems, including economic dispatch and unit commitment. Such forecasts are necessarily uncertain, limiting the reliability and cost effectiveness of operations planning models based on a single deterministic or "point" forecast. A common approach to address this limitation involves the use of a number of probabilistic scenarios, each specifying a possible trajectory of wind power production, with associated probability. We present and analyze a novel method for generating probabilistic wind power scenarios, leveraging available historical information in the form of forecasted and corresponding observed wind power time series. We estimate non-parametric forecast error densities, specifically using epi-spline basis functions, allowing us to capture the skewed and non-parametric nature of error densities observed in real-world data. We then describe a method to generate probabilistic scenarios from these basis functions that allows users to control for the degree to which extreme errors are captured. We compare the performance of our approach to the current state-of-the-art considering publicly available data associated with the Bonneville Power Administration, analyzing aggregate production of a number of wind farms over a large geographic region. Finally, we discuss the advantages of our approach in the context of specific power systems operations planning problems: stochastic unit commitment and economic dispatch. Here, our methodology is embodied in the joint Sandia – University of California Davis Prescient software package for assessing and analyzing stochastic operations strategies.
NASA Astrophysics Data System (ADS)
Nebuya, Satoru; Koike, Tomotaka; Imai, Hiroshi; Noshiro, Makoto; Brown, Brian H.; Soma, Kazui
2010-04-01
The consistency of regional lung density measurements as estimated by Electrical Impedance Tomography (EIT), in eleven patients supported by a mechanical ventilator, was validated to verify the feasibility of its use in intensive care medicine. There were significant differences in regional lung densities between the normal lung and diseased lungs associated with pneumonia, atelectasis and pleural effusion (Steel-Dwass test, p < 0.05). Temporal changes in regional lung density of patients with atelectasis were observed to be in good agreement with the results of clinical diagnosis. These results indicate that it is feasible to obtain a quantitative value for regional lung density using EIT.
Luminosity and Stellar Mass Functions from the 6dF Galaxy Survey
NASA Astrophysics Data System (ADS)
Colless, M.; Jones, D. H.; Peterson, B. A.; Campbell, L.; Saunders, W.; Lah, P.
2007-12-01
The completed 6dF Galaxy Survey includes redshifts for over 124,000 galaxies. We present luminosity functions in optical and near-infrared passbands that span a range of 10^4 in luminosity. These luminosity functions show systematic deviations from the Schechter form. The corresponding luminosity densities in the optical and near-infrared are consistent with an old stellar population and a moderately declining star formation rate. Stellar mass functions, derived from the K band luminosities and simple stellar population models selected by b_J-r_F colour, lead to an estimate of the present-day stellar mass density of ρ_* = (5.00 ± 0.11) × 10^8 h M_⊙ Mpc^{-3}, corresponding to Ω_* h = (1.80 ± 0.04) × 10^{-3}.
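Since deviations from the Schechter form are the point of comparison here, it helps to have that form written out. A small sketch evaluating a Schechter mass function per dex and the implied stellar mass density; the parameter values are placeholders, not the 6dFGS fit:

```python
import numpy as np

def schechter_per_dex(M, phi_star, M_star, alpha):
    """Schechter stellar mass function per dex:
    phi(M) dlog10(M) = ln(10) * phi_star * (M/M*)**(1+alpha) * exp(-M/M*) dlog10(M).
    Parameter values used below are placeholders, not the 6dFGS fit."""
    x = M / M_star
    return np.log(10.0) * phi_star * x ** (1 + alpha) * np.exp(-x)

logM = np.linspace(8.0, 12.5, 2000)          # log10(M / Msun)
dlogM = logM[1] - logM[0]
phi = schechter_per_dex(10.0 ** logM, phi_star=4e-3, M_star=8e10, alpha=-1.1)

# Implied stellar mass density: rho_* = integral of M * phi(M) dlog10(M).
rho_star = np.sum(phi * 10.0 ** logM) * dlogM
print(f"rho_* ~ {rho_star:.2e} Msun / Mpc^3")   # order 3e8 for these inputs
```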
A New Determination of the Luminosity Function of the Galactic Halo.
NASA Astrophysics Data System (ADS)
Dawson, Peter Charles
The luminosity function of the galactic halo is determined by subtracting from the observed numbers of proper motion stars in the LHS Catalogue the expected numbers of main-sequence, degenerate, and giant stars of the disk population. Selection effects are accounted for by Monte Carlo simulations based upon realistic colour-luminosity relations and kinematic models. The catalogue is shown to be highly complete, and a calibration of the magnitude estimates therein is presented. It is found that, locally, the ratio of disk to halo material is close to 950, and that the mass density in main-sequence and subgiant halo stars with 3 < M_V < 14 is about 2 × 10^-5 M_⊙ pc^-3. With due allowance for white dwarfs and binaries, and taking into account the possibility of a moderate rate of halo rotation, it is argued that the total density does not much exceed 5 × 10^-5 M_⊙ pc^-3, in which case the total mass interior to the sun is of the order of 5 × 10^8 M_⊙ for a density distribution which projects to a de Vaucouleurs r^(1/4) law. It is demonstrated that if the Wielen luminosity function is a faithful representation of the stellar distribution in the solar neighbourhood, then the observed numbers of large proper motion stars are inconsistent with the presence of an intermediate population at the level, and with the kinematics, advocated recently by Gilmore and Reid. The initial mass function (IMF) of the halo is considered, and weak evidence is presented that its slope is at least not shallower than that of the disk population IMF. A crude estimate of the halo's age, based on a comparison of the main-sequence turnoff in the reduced proper motion diagram with theoretical models, is obtained; a tentative lower limit is 15 Gyr, with a best estimate of between 15 and 18 Gyr. Finally, the luminosity function obtained here is compared with those determined in other investigations.
Environment of Submillimeter Galaxies
NASA Astrophysics Data System (ADS)
Hou, K.-c.; Chen, L.-w.
2013-10-01
To study the environment of high-redshift star-forming galaxies — submillimeter galaxies (SMGs) — and their role during large-scale structure formation, we have estimated the galaxy number density fluctuations around SMGs, and analyzed their cross-correlation functions with Lyman alpha emitters (LAEs) and optically selected galaxies with photometric redshifts in the COSMOS and ECDFS fields. Only a marginal cross-correlation between SMGs and optically selected galaxies is found at most redshift intervals in our results, except for a relatively strong correlation detected in the case of AzTEC-detected SMGs with galaxies at z ~ 2.6 and 3.6. The density fluctuations around SMGs with estimated redshifts show that most SMGs are located in high-density regions. There is no correlation signal between LAEs and SMGs, and the galaxy density fluctuations indicate a slight anti-correlation on scales smaller than 2 Mpc. Furthermore, we also investigate the density fluctuations of passive and star-forming galaxies, selected by optical and near-infrared colors at similar redshifts, around SMGs. Finally, the implication of our results for the interconnection between high-redshift galaxy populations is discussed.
Michalek, Lukas; Barner, Leonie; Barner-Kowollik, Christopher
2018-03-07
Well-defined polymer strands covalently tethered onto solid substrates determine the properties of the resulting functional interface. Herein, the current approaches to determining quantitative grafting densities are assessed. Based on a brief introduction to the key theories describing polymer brush regimes, a user's guide is provided to estimating maximum chain coverage and, importantly, to examining the most frequently employed approaches for determining grafting densities, i.e., dry thickness measurements, gravimetric assessment, and swelling experiments. An estimation of the reliability of these determination methods is provided by carefully evaluating their assumptions and assessing the stability of the underpinning equations. A practical guide for comparatively and quantitatively evaluating the reliability of a given approach is thus provided, enabling the field to critically judge experimentally determined grafting densities and to avoid the reporting of grafting densities that fall outside the physically realistic parameter space. The assessment is concluded with a perspective on the development of advanced approaches for the determination of grafting density, in particular single-chain methodologies. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
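Of the approaches assessed above, the dry-thickness route is the most compact to state: sigma = h * rho * N_A / M_n. A sketch with illustrative inputs (the numbers are not taken from the paper):

```python
def grafting_density(h_nm, rho_g_cm3, Mn_g_mol):
    """Grafting density from the dry-thickness method, sigma = h*rho*NA/Mn,
    one of the approaches the review assesses. Returns chains per nm^2.
    Input values below are illustrative, not taken from the paper."""
    NA = 6.02214076e23
    h_cm = h_nm * 1e-7
    sigma_per_cm2 = h_cm * rho_g_cm3 * NA / Mn_g_mol
    return sigma_per_cm2 * 1e-14                 # cm^-2 -> nm^-2

# A 10 nm dry polystyrene layer (rho ~ 1.05 g/cm^3) of 50 kg/mol chains:
print(grafting_density(10.0, 1.05, 50_000.0), "chains/nm^2")   # ~0.13
```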
Plasma dynamics near critical density inferred from direct measurements of laser hole boring
NASA Astrophysics Data System (ADS)
Gong, Chao; Tochitsky, Sergei Ya.; Fiuza, Frederico; Pigeon, Jeremy J.; Joshi, Chan
2016-06-01
We have used multiframe picosecond optical interferometry to make direct measurements of the hole boring velocity, vHB, of the density cavity pushed forward by a train of CO2 laser pulses in a near-critical-density helium plasma. As the pulse train intensity rises, the increasing radiation pressure of each pulse pushes the density cavity forward and the plasma electrons are strongly heated. After the peak laser intensity, the plasma pressure exerted by the heated electrons strongly impedes the hole boring process and vHB falls rapidly as the laser pulse intensity falls at the back of the laser pulse train. A heuristic theory is presented that allows the estimation of the plasma electron temperature from the measurements of the hole boring velocity. The measured values of vHB, and the estimated values of the heated electron temperature as a function of laser intensity, are in reasonable agreement with those obtained from two-dimensional numerical simulations.
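For scale, the radiation-pressure-only hole-boring estimate of Robinson et al. (2009) is straightforward to evaluate. Note that it omits the thermal-pressure correction that is the point of the paper's heuristic theory, and the inputs below are assumptions:

```python
def hole_boring_velocity(I_Wm2, n_i_m3, m_i_kg):
    """Radiation-pressure hole-boring estimate v_HB/c = sqrt(Pi)/(1+sqrt(Pi)),
    with Pi = I / (m_i * n_i * c**3) (Robinson et al., 2009). Shown for
    orientation only; the paper's theory adds the plasma thermal pressure."""
    c = 2.998e8
    Pi = I_Wm2 / (m_i_kg * n_i_m3 * c**3)
    s = Pi ** 0.5
    return s / (1 + s) * c

# CO2 laser at ~1e16 W/cm^2 on near-critical helium (hypothetical numbers).
m_He = 4 * 1.6726e-27            # kg
n_crit_10um = 1e25               # m^-3, ~critical density at 10.6 um
print(hole_boring_velocity(1e20, n_crit_10um, m_He), "m/s")   # ~2e6 m/s
```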
The spatial distribution of fixed mutations within genes coding for proteins
NASA Technical Reports Server (NTRS)
Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.
1983-01-01
An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results came from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
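The contrast the authors draw can be reproduced on toy data: a Poisson fit pins the variance to the mean, while the geometric density (the limiting form of the negative binomial highlighted above) accommodates the heavy tail. A short sketch on hypothetical substitution counts:

```python
import numpy as np
from scipy import stats

# Hypothetical substitution counts per site, with the heavy tail the paper
# reports (these are illustrative numbers, not the published data).
counts = np.array([0] * 60 + [1] * 20 + [2] * 9 + [3] * 5 + [4] * 3 + [6] * 2 + [9] * 1)
mean = counts.mean()

k = np.arange(counts.max() + 1)
obs = np.bincount(counts, minlength=k.size) / counts.size
pois = stats.poisson.pmf(k, mean)                 # Poisson, matched mean
p_geom = 1.0 / (1.0 + mean)                       # geometric on {0, 1, 2, ...}
geom = stats.geom.pmf(k + 1, p_geom)              # scipy's support starts at 1

for kk in k:
    print(f"{kk}: observed {obs[kk]:.3f}  poisson {pois[kk]:.3f}  geometric {geom[kk]:.3f}")
```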
Estimates of Stellar Weak Interaction Rates for Nuclei in the Mass Range A=65-80
NASA Astrophysics Data System (ADS)
Pruet, Jason; Fuller, George M.
2003-11-01
We estimate lepton capture and emission rates, as well as neutrino energy loss rates, for nuclei in the mass range A=65-80. These rates are calculated on a temperature/density grid appropriate for a wide range of astrophysical applications including simulations of late time stellar evolution and X-ray bursts. The basic inputs in our single-particle and empirically inspired model are (i) experimentally measured level information, weak transition matrix elements, and lifetimes, (ii) estimates of matrix elements for allowed experimentally unmeasured transitions based on the systematics of experimentally observed allowed transitions, and (iii) estimates of the centroids of the GT resonances motivated by shell model calculations in the fp shell as well as by (n, p) and (p, n) experiments. Fermi resonances (isobaric analog states) are also included, and it is shown that Fermi transitions dominate the rates for most interesting proton-rich nuclei for which an experimentally determined ground state lifetime is unavailable. For the purposes of comparing our results with more detailed shell model based calculations we also calculate weak rates for nuclei in the mass range A=60-65 for which Langanke & Martinez-Pinedo have provided rates. The typical deviation in the electron capture and β-decay rates for these ~30 nuclei is less than a factor of 2 or 3 for a wide range of temperature and density appropriate for presupernova stellar evolution. We also discuss some subtleties associated with the partition functions used in calculations of stellar weak rates and show that the proper treatment of the partition functions is essential for estimating high-temperature β-decay rates. In particular, we show that partition functions based on unconverged Lanczos calculations can result in errors in estimates of high-temperature β-decay rates.
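One of the partition-function subtleties mentioned above is that a finite list of known levels truncates the sum G(T) = sum_i (2J_i + 1) exp(-E_i / kT), which matters most at high temperature. A toy sketch (the level scheme is hypothetical):

```python
import numpy as np

def partition_function(E_levels_MeV, J_levels, T9):
    """Nuclear partition function G(T) = sum_i (2J_i + 1) * exp(-E_i / kT)
    from a finite list of levels. T9 is temperature in GK. A truncated level
    list underestimates G at high temperature, one of the partition-function
    subtleties the paper discusses. Illustrative only."""
    kT_MeV = 0.08617 * T9                  # Boltzmann constant: 86.17 keV/GK
    E = np.asarray(E_levels_MeV)
    J = np.asarray(J_levels)
    return np.sum((2 * J + 1) * np.exp(-E / kT_MeV))

# Toy level scheme (hypothetical): 0+ ground state, 2+ level at 1.0 MeV.
print(partition_function([0.0, 1.0], [0, 2], T9=5.0))   # ~1.5
```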
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne A. Y.; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
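A generic way to check polarizabilities from any electronic-structure backend is the finite-field central difference alpha_ij = d mu_i / d F_j. The sketch below uses a placeholder dipole routine; nothing in it is the paper's implementation:

```python
import numpy as np

def polarizability(compute_dipole, dF=1e-4):
    """Finite-field estimate of the static polarizability tensor:
    alpha[:, j] = (mu(+F_j) - mu(-F_j)) / (2 dF). compute_dipole is a
    placeholder for any electronic-structure call returning the dipole
    vector in a small uniform field (hypothetical interface)."""
    alpha = np.zeros((3, 3))
    for j in range(3):
        F = np.zeros(3)
        F[j] = dF
        alpha[:, j] = (compute_dipole(F) - compute_dipole(-F)) / (2 * dF)
    return alpha

# Toy linear response for demonstration: mu = alpha_true @ F.
alpha_true = np.diag([10.0, 12.0, 15.0])
print(polarizability(lambda F: alpha_true @ F))   # recovers alpha_true
```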
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alarcón, J. M.; Hiller Blin, A. N.; Vicente Vacas, M. J.
2017-05-08
The baryon electromagnetic form factors are expressed in terms of two-dimensional densities describing the distribution of charge and magnetization in transverse space at fixed light-front time. In this paper, we calculate the transverse densities of the spin-1/2 flavor-octet baryons at peripheral distances b = O(M_π^-1) using methods of relativistic chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals over the imaginary parts of the form factors in the timelike region (spectral functions). The isovector spectral functions on the two-pion cut t > 4M_π^2 are calculated using relativistic χEFT including octet and decuplet baryons. The χEFT calculations are extended into the ρ meson mass region using an N/D method that incorporates the pion electromagnetic form factor data. The isoscalar spectral functions are modeled by vector meson poles. We compute the peripheral charge and magnetization densities in the octet baryon states, estimate the uncertainties, and determine the quark flavor decomposition. Finally, the approach can be extended to baryon form factors of other operators and the moments of generalized parton distributions.
NASA Astrophysics Data System (ADS)
Settar, Abdelhakim; Abboudi, Saïd; Madani, Brahim; Nebbali, Rachid
2018-02-01
Due to the endothermic nature of the steam methane reforming reaction, the process is often limited by the heat transfer behavior in the reactors. Poor thermal behavior sometimes leads to slow reaction kinetics, which is characterized by the presence of cold spots in the catalytic zones. Within this framework, the present work consists of a numerical investigation, in conjunction with an experimental one, of the one-dimensional heat transfer phenomenon during the heat supply of a catalytic-wall reactor designed for hydrogen production. The studied reactor is inserted in an electric furnace, where the heat requirement of the endothermic reaction is supplied by an electric heating system. During the heat supply, the unknown heat flux density received by the reactive flow is estimated using inverse methods. On the basis of the catalytic-wall reactor model, an experimental setup is engineered in situ to measure the temperature distribution. The measurements are then fed into the numerical heat flux estimation procedure, which is based on the Function Specification Method (FSM). The measured and estimated temperatures are compared, and the heat flux density crossing the reactor wall is determined.
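To illustrate the inverse-problem class (though not the FSM itself): recovering a flux history from noisy temperatures is ill-posed and needs stabilization. A minimal sketch assuming a toy linear sensitivity kernel and whole-domain Tikhonov least squares, where the Function Specification Method would instead march in time holding the flux constant over a few future steps:

```python
import numpy as np

# Toy linear inverse heat-conduction problem: the temperature rise is a
# discrete Duhamel convolution T = X q of the unknown flux history q with an
# assumed sensitivity kernel phi. Tikhonov least squares stands in for the
# paper's sequential Function Specification Method.
n, dt = 100, 1.0
phi = np.sqrt(np.arange(1, n + 1) * dt)                 # toy kernel (assumed)
X = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        X[i, j] = phi[i - j]

rng = np.random.default_rng(3)
q_true = np.where(np.arange(n) < 50, 1.0, 0.3)          # step change in flux
T_meas = X @ q_true + 0.05 * rng.standard_normal(n)     # noisy temperatures

lam = 1.0                                               # regularization weight
q_hat = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ T_meas)
print("estimated flux early/late:", q_hat[:3], q_hat[-3:])
```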
A fast and objective multidimensional kernel density estimation method: fastKDE
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...
2016-03-07
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
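The conditional-PDF capability is easy to demonstrate with any bivariate KDE. The sketch below uses scipy's gaussian_kde with its default Scott's-rule bandwidth purely for illustration; it is not the fastKDE estimator, whose kernel shape and bandwidth are chosen objectively:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.standard_normal(10_000)
y = 0.8 * x + 0.6 * rng.standard_normal(10_000)      # correlated pair

# Bivariate KDE (Scott's-rule bandwidth) evaluated on a regular grid.
kde = stats.gaussian_kde(np.vstack([x, y]))
grid = np.mgrid[-3:3:50j, -3:3:50j]                  # shape (2, 50, 50)
pdf = kde(grid.reshape(2, -1)).reshape(50, 50)

# Conditional density p(y | x = 1): slice the joint at x ~ 1, renormalize.
xs = np.linspace(-3, 3, 50)
ix = np.argmin(np.abs(xs - 1.0))
cond = pdf[ix] / pdf[ix].sum()
print("conditional mode at y =", xs[np.argmax(cond)])   # ~0.8
```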
NASA Astrophysics Data System (ADS)
Davidzon, I.; Cucciati, O.; Bolzonella, M.; De Lucia, G.; Zamorani, G.; Arnouts, S.; Moutard, T.; Ilbert, O.; Garilli, B.; Scodeggio, M.; Guzzo, L.; Abbas, U.; Adami, C.; Bel, J.; Bottini, D.; Branchini, E.; Cappi, A.; Coupon, J.; de la Torre, S.; Di Porto, C.; Fritz, A.; Franzetti, P.; Fumana, M.; Granett, B. R.; Guennou, L.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; McCracken, H. J.; Mellier, Y.; Moscardini, L.; Polletta, M.; Pollo, A.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.
2016-02-01
We exploit the first public data release of VIPERS to investigate environmental effects in the evolution of galaxies between z ~ 0.5 and 0.9. The large number of spectroscopic redshifts (more than 50 000) over an area of about 10 deg² provides a galaxy sample with high statistical power. The accurate redshift measurements (σ_z = 0.00047(1 + z_spec)) allow us to robustly isolate galaxies living in the lowest and highest density environments (δ < 0.7 and δ > 4, respectively), as defined in terms of the spatial 3D density contrast δ. We estimate the stellar mass function of galaxies residing in these two environments and constrain the high-mass end (ℳ ≳ 10^11 ℳ⊙) with unprecedented precision. We find that the galaxy stellar mass function in the densest regions has a different shape than was measured at low densities, with an enhancement of massive galaxies and a hint of a flatter (less negative) slope at z < 0.8. We normalise each mass function to the comoving volume occupied by the corresponding environment and relate estimates from different redshift bins. We observe an evolution of the stellar mass function of VIPERS galaxies in high densities, while the low-density one is nearly constant. We compare these results to semi-analytical models and find consistent environmental signatures in the simulated stellar mass functions. We discuss how the halo mass function and the fraction of central/satellite galaxies depend on the environments considered, making intrinsic and environmental properties of galaxies physically coupled, and hence difficult to disentangle. The evolution of our low-density regions is well described by the formalism introduced by Peng et al. (2010, ApJ, 721, 193), and is consistent with the idea that galaxies become progressively passive because of internal physical processes. The same formalism could also describe the evolution of the mass function in the high-density regions, but only if a significant contribution from dry mergers is considered. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programmes 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Hybrid reconstruction of quantum density matrix: when low-rank meets sparsity
NASA Astrophysics Data System (ADS)
Li, Kezhi; Zheng, Kai; Yang, Jingbei; Cong, Shuang; Liu, Xiaomei; Li, Zhaokai
2017-12-01
Both mathematical theory and experiments have verified that quantum state tomography based on compressive sensing is an efficient framework for the reconstruction of quantum density states. In recent physical experiments, we found that many unknown density matrices of interest are low-rank as well as sparse. Bearing this information in mind, in this paper we propose a reconstruction algorithm that combines the low-rank and sparsity properties of density matrices, and we prove theoretically that, with overwhelming probability, the solution of the optimization problem can only be the true density matrix satisfying the model, provided a sufficient number of measurements is available. The solver leverages a fixed-point equation technique in which a step-by-step strategy is developed using an extended soft-threshold operator that copes with complex values. Numerical experiments on density matrix estimation for real nuclear magnetic resonance devices reveal that the proposed method achieves better accuracy than some existing methods. We believe the proposed method could serve as a generalized approach to be widely implemented in quantum state estimation.
Quang V. Cao; Shanna M. McCarty
2006-01-01
Diameter distributions in a forest stand have been successfully characterized by use of the Weibull function. Of special interest are cases where parameters of a Weibull distribution that models a future stand are predicted, either directly or indirectly, from current stand density and dominant height. This study evaluated four methods of predicting the Weibull...
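To make the Weibull characterization concrete, the following sketch fits a three-parameter Weibull to stand diameters with SciPy; the sample diameters and the fixed location parameter are illustrative assumptions, not data or settings from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical breast-height diameters (cm) for one stand -- illustrative only.
rng = np.random.default_rng(0)
diameters = rng.weibull(2.2, size=200) * 18.0 + 5.0

# Fit a three-parameter Weibull; fixing the location at the smallest
# measurable diameter is a common convention, assumed here.
shape, loc, scale = stats.weibull_min.fit(diameters, floc=5.0)
print(f"shape={shape:.2f}, location={loc:.1f} cm, scale={scale:.1f} cm")

# The fitted parameters can then be predicted, directly or indirectly,
# from stand density and dominant height, as in the methods evaluated above.
```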
Efficient Density Functional Approximation for Electronic Properties of Conjugated Systems
NASA Astrophysics Data System (ADS)
Caldas, Marília J.; Pinheiro, José Maximiano, Jr.; Blum, Volker; Rinke, Patrick
2014-03-01
There is on-going discussion about reliable prediction of electronic properties of conjugated oligomers and polymers, such as the ionization potential IP and the energy gap. Several exchange-correlation (XC) functionals are used by the density functional theory community, with different success for different properties. In this work we follow a recent proposal: a fraction α of exact exchange is added to the semi-local PBE XC functional, aiming at consistency, for a given property, with the results obtained by many-body perturbation theory within the G0W0 approximation. We focus on the IP, taken as the negative of the highest occupied molecular orbital energy. We choose α from a study of the prototype family trans-acetylene, and apply this same α to a set of oligomers for which experimental data are available (acenes, phenylenes and others). Our results indicate that we can obtain excellent estimates, within 0.2 eV mean absolute deviation from the experimental values, better than through complete E(N-1) - E(N) calculations from the starting PBE functional. We also obtain good estimates for the electrical gap and orbital energies close to the band edge. Work supported by FAPESP, CNPq, and CAPES, Brazil, and DAAD, Germany.
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address a method for estimating the isometric muscle tension of fingers, as fundamental research toward a neural signal-based prosthesis of the fingers. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking time in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
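A minimal NumPy sketch of the two-convolution pipeline described above; the sampling rate, spike times, kernel width, and twitch shape are illustrative assumptions rather than the authors' calibrated values.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz) -- assumed
t = np.arange(0, 1.0, 1 / fs)

# Spike array detected from needle EMG: 1 at each detected spike time.
spikes = np.zeros_like(t)
spikes[[100, 150, 180, 400, 430, 700]] = 1.0   # hypothetical spike times

# First convolution: spikes with a normal distribution -> probability
# density of spike-invoking time in the muscle.
tau = np.arange(-0.05, 0.05, 1 / fs)
gauss = np.exp(-tau**2 / (2 * 0.01**2))
gauss /= gauss.sum()
rate = np.convolve(spikes, gauss, mode="same")

# Second convolution: the rate estimate with an isometric-twitch impulse
# response (modeled here as a simple rise-and-decay curve).
tw = np.arange(0, 0.3, 1 / fs)
twitch = (tw / 0.05) * np.exp(1 - tw / 0.05)   # assumed twitch shape
tension = np.convolve(rate, twitch)[: t.size]  # estimated muscle tension
```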
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase the estimation error, and all the information extracted from such a pdf will continue to contain this error. Such techniques are likely to introduce artificial characteristics into the estimated pdf that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates, fitting the observed TEC values almost perfectly, can be obtained compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
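A minimal sketch of the KDE step with SciPy's Gaussian kernel estimator; the TEC sample below is synthetic and stands in for GNSS-derived values.

```python
import numpy as np
from scipy import stats

# Hypothetical TEC estimates (TECU) standing in for GNSS-derived values.
rng = np.random.default_rng(1)
tec = np.concatenate([rng.normal(20, 3, 800), rng.normal(35, 5, 200)])

# Non-parametric pdf via Gaussian KDE -- no fixed functional form imposed.
kde = stats.gaussian_kde(tec)
grid = np.linspace(tec.min(), tec.max(), 512)
pdf = kde(grid)

# Statistical parameters derived from the sample, as in the study.
mean, var = tec.mean(), tec.var()
kurt = stats.kurtosis(tec)       # excess kurtosis
print(f"mean={mean:.1f} TECU, var={var:.1f}, kurtosis={kurt:.2f}")
```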
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; Brucken, E.; Devoto, F.
We search for resonant production of tt̄ pairs in 4.8 fb⁻¹ integrated luminosity of pp̄ collision data at √s = 1.96 TeV in the lepton+jets decay channel, where one top quark decays leptonically and the other hadronically. A matrix-element reconstruction technique is used; for each event a probability density function of the tt̄ candidate invariant mass is sampled. These probability density functions are used to construct a likelihood function, whereby the cross section for resonant tt̄ production is estimated, given a hypothetical resonance mass and width. The data indicate no evidence of resonant production of tt̄ pairs. A benchmark model of leptophobic Z′ → tt̄ is excluded with mZ′ < 900 GeV/c² at 95% confidence level.
Using Density Functional Theory (DFT) for the Calculation of Atomization Energies
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The calculation of atomization energies using density functional theory (DFT) with the B3LYP hybrid functional is reported. The sensitivity of the atomization energy to the basis set is studied and compared with the coupled cluster singles and doubles approach with a perturbational estimate of the triples (CCSD(T)). Merging the B3LYP results with the G2(MP2) approach is also considered. It is found that replacing the geometry optimization and calculation of the zero-point energy by the analogous quantities computed using the B3LYP approach reduces the maximum error in the G2(MP2) approach. In addition to the 55 G2 atomization energies, some results for transition metal containing systems will also be presented.
Uncertainty quantification and propagation in nuclear density functional theory
Schunck, N.; McDonnell, J. D.; Higdon, D.; ...
2015-12-23
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While ongoing efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
C-5M Fuel Efficiency Through MFOQA Data Analysis
2015-03-26
deterioration of commercial high-bypass ratio turbofan engines. (No. 801118). SAE Technical Paper. Mirtich, J. M. (2011). Cost index flying. (Unpublished... D. L. (2010). Constrained Kalman filtering via density function truncation for turbofan engine health estimation. International Journal of Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devaraj, Arun; Prabhakaran, Ramprashad; Joshi, Vineet V.
2016-04-12
The purpose of this document is to provide a theoretical framework for (1) estimating uranium carbide (UC) volume fraction in a final alloy of uranium with 10 weight percent molybdenum (U-10Mo) as a function of final alloy carbon concentration, and (2) estimating effective 235U enrichment in the U-10Mo matrix after accounting for the loss of 235U in forming UC. This report also serves as a theoretical baseline for the effective density of as-cast low-enriched U-10Mo alloy, and hence as the baseline for quality control of the final alloy carbon content.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We performed Monte Carlo simulations to show that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criteria.
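A rough numerical sketch of the Tikhonov step, assuming the Fredholm integral has already been discretized into a kernel matrix A; the matrix, data vector, and regularization parameter below are placeholders, not the projection kernel or the parameter chosen by the authors' procedure.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 for the discretized
    Fredholm problem; x approximates the rotational-velocity pdf."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Placeholder kernel matrix and observed v sin i histogram; in the paper
# A comes from discretizing the Fredholm integral of the first kind.
rng = np.random.default_rng(2)
A = rng.random((80, 60))
b = rng.random(80)
x_hat = tikhonov_solve(A, b, lam=0.1)
x_hat = np.clip(x_hat, 0, None)   # a pdf must be non-negative
x_hat /= x_hat.sum()              # normalize to unit mass
```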
The heat of formation of gaseous PuO2(2+) from relativistic density functional calculations.
Moskaleva, Lyudmila V; Matveev, Alexei V; Dengler, Joachim; Rösch, Notker
2006-08-28
Using a set of model reactions, we estimated the heat of formation of gaseous PuO2(2+) from quantum-chemical reaction enthalpies and experimental heats of formation of reference species. To this end, we carried out relativistic density functional calculations on the molecules PuO2(2+), PuO2, PuF6, and PuF4. We used a revised variant (PBEN) of the Perdew-Burke-Ernzerhof gradient-corrected exchange-correlation functional, and we accounted for spin-orbit interaction in a self-consistent fashion. As open-shell Pu species with two or more unpaired 5f electrons are involved, spin-orbit interaction significantly affects the energies of the model reactions. Our theoretical estimate for the heat of formation ΔfH°0(PuO2(2+), g), 418 ± 15 kcal mol⁻¹, evaluated using plutonium fluorides as references, is in good agreement with a recent experimental result, 413 ± 16 kcal mol⁻¹. The theoretical value connected to the experimental heat of formation of PuO2(g) has a notably higher uncertainty and therefore was not included in the final result.
The Most Massive Galaxies and Black Holes Allowed by ΛCDM
NASA Astrophysics Data System (ADS)
Behroozi, Peter; Silk, Joseph
2018-04-01
Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected ΛCDM halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST and WFIRST will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass — stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie
2017-01-01
A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
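A rough sketch of the semi-nonparametric construction, under the common formulation in which a squared polynomial series reshapes a base density; the mapping of the Gumbel CDF onto the Legendre domain and the coefficients below are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
from numpy.polynomial import legendre

def snp_pdf(eps, coeffs, beta=1.0):
    """Base Gumbel density reshaped by a squared Legendre series."""
    g = np.exp(-eps / beta - np.exp(-eps / beta)) / beta   # Gumbel pdf
    G = np.exp(-np.exp(-eps / beta))                       # Gumbel cdf
    u = 2.0 * G - 1.0              # map to [-1, 1] -- assumed argument
    poly = legendre.legval(u, coeffs)
    return g * poly**2

# Normalize numerically so the reshaped function integrates to one.
eps = np.linspace(-10, 15, 4000)
raw = snp_pdf(eps, coeffs=[1.0, 0.3, -0.2])   # hypothetical coefficients
pdf = raw / (raw.sum() * (eps[1] - eps[0]))
```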
Non-Fickian dispersion of groundwater age
Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.
2014-01-01
We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651
Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.
Korth, Martin
2013-10-14
The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6R⁻⁶ terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
Bose Condensation at He-4 Interfaces
NASA Technical Reports Server (NTRS)
Draeger, E. W.; Ceperley, D. M.
2003-01-01
Path Integral Monte Carlo was used to calculate the Bose-Einstein condensate fraction at the surface of a helium film at T = 0.77 K, as a function of density. Moving from the center of the slab to the surface, the condensate fraction was found to initially increase with decreasing density to a maximum value of 0.9, before decreasing. Long wavelength density correlations were observed in the static structure factor at the surface of the slab. A surface dispersion relation was calculated from imaginary-time density-density correlations. Similar calculations of the superfluid density throughout He-4 droplets doped with linear impurities (HCN)n are presented. After deriving a local estimator for the superfluid density distribution, we find a decreased superfluid response in the first solvation layer. This effective normal fluid exhibits temperature dependence similar to that of a two-dimensional helium system.
Correlation techniques and measurements of wave-height statistics
NASA Technical Reports Server (NTRS)
Guthart, H.; Taylor, W. C.; Graf, K. A.; Douglas, D. G.
1972-01-01
Statistical measurements of wave height fluctuations have been made in a wind wave tank. The power spectral density function of temporal wave height fluctuations evidenced second-harmonic components and an f⁻⁵ power-law decay beyond the second harmonic. The observations of second harmonic effects agreed very well with a theoretical prediction. From the wave statistics, surface drift currents were inferred and compared to experimental measurements with satisfactory agreement. Measurements were made of the two-dimensional correlation coefficient at 15 deg increments in angle with respect to the wind vector. An estimate of the two-dimensional spatial power spectral density function was also made.
A strategy to unveil transient sources of ultra-high-energy cosmic rays
NASA Astrophysics Data System (ADS)
Takami, Hajime
2013-06-01
Transient generation of ultra-high-energy cosmic rays (UHECRs) has been motivated from promising candidates of UHECR sources such as gamma-ray bursts, flares of active galactic nuclei, and newly born neutron stars and magnetars. Here we propose a strategy to unveil transient sources of UHECRs from UHECR experiments. We demonstrate that the rate of UHECR bursts and/or flares is related to the apparent number density of UHECR sources, which is the number density estimated on the assumption of steady sources, and the time-profile spread of the bursts produced by cosmic magnetic fields. The apparent number density strongly depends on UHECR energies under a given rate of the bursts, which becomes observational evidence of transient sources. It is saturated at the number density of host galaxies of UHECR sources. We also derive constraints on the UHECR burst rate and/or energy budget of UHECRs per source as a function of the apparent source number density by using models of cosmic magnetic fields. In order to obtain a precise constraint on the UHECR burst rate, high event statistics above ~10²⁰ eV for evaluating the apparent source number density at the highest energies, and better knowledge of cosmic magnetic fields from future observations and/or simulations to better estimate the time-profile spread of UHECR bursts, are required. The estimated rate allows us to constrain transient UHECR sources through comparison with the occurrence rates of known energetic transient phenomena.
NASA Astrophysics Data System (ADS)
Delibalta, M. S.; Kahraman, S.; Comakli, R.
2015-11-01
Because the indirect tests are easier and cheaper than the direct tests, the prediction of rock properties from indirect testing methods is important, especially for preliminary investigations. In this study, the predictability of the physico-mechanical rock properties from the noise level measured during cutting rock with a diamond saw was investigated. Noise measurement test, uniaxial compressive strength (UCS) test, Brazilian tensile strength (BTS) test, point load strength (Is) test, density test, and porosity test were carried out on 54 different rock types in the laboratory. The results were statistically analyzed to derive estimation equations. Strong correlations between the noise level and the mechanical rock properties were found. The relations follow power functions. Increasing rock strength increases the noise level. Density and porosity also correlated strongly with the noise level. The relations follow linear functions. Increasing density increases the noise level, while increasing porosity decreases the noise level. The developed equations are valid for rocks with a compressive strength below 150 MPa. The concluding remark is that the physico-mechanical rock properties can reliably be estimated from the noise level measured during cutting the rock with a diamond saw.
The pointwise estimates of diffusion wave of the compressible micropolar fluids
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Wang, Weike
2018-09-01
The pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum, as in the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equation with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use the decomposition of the momentums into a fluid part and an electromagnetic part to study three smaller Green's matrices. A consequence of this decomposition is that we must deal with a new problem: the nonlinear terms contain nonlocal operators. We solve it by using the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive different pointwise estimates for different unknown variables, such that the estimate of each unknown variable agrees with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.
Galaxy–galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...
2017-07-21
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
On the estimation of the current density in space plasmas: Multi- versus single-point techniques
NASA Astrophysics Data System (ADS)
Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco
2017-06-01
Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separations down to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter to sub-proton scales. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft, with respect to multi-spacecraft missions, in the current density evaluation.
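A minimal sketch of the curlometer in its standard reciprocal-vector form, a four-point linear estimate of the curl; the positions, fields, and units below are synthetic and illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

def reciprocal_vectors(r):
    """Reciprocal vectors k_a of a 4-point (tetrahedral) configuration."""
    k = np.empty((4, 3))
    for a in range(4):
        b, c, d = [(a + i) % 4 for i in (1, 2, 3)]
        cross = np.cross(r[c] - r[b], r[d] - r[b])
        k[a] = cross / np.dot(r[a] - r[b], cross)
    return k

def curlometer(r, B):
    """Linear estimate of the current density from four simultaneous
    magnetic-field vectors B (nT) at positions r (km)."""
    k = reciprocal_vectors(r)
    curl_B = sum(np.cross(k[a], B[a]) for a in range(4))  # nT / km
    return curl_B * 1e-12 / MU0                           # A / m^2

# Synthetic example: field with a uniform curl, tetrahedron of ~10 km size.
r = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
B = np.array([[0, 0, 0], [0, 10, 0], [0, 0, 0], [0, 0, 0]], float)
print(curlometer(r, B))  # recovers d(By)/dx as a z-directed current
```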
NASA Technical Reports Server (NTRS)
Chrzanowski, J.; Xing, W. B.; Atlan, D.; Irwin, J. C.; Heinrich, B.; Cragg, R. A.; Zhou, H.; Angus, V.; Habib, F.; Fife, A. A.
1995-01-01
Correlations between critical current density (jc), critical temperature (Tc) and the density of edge dislocations and nonuniform strain have been observed in YBCO thin films deposited by pulsed laser ablation on (001) LaAlO3 single crystals. Distinct maxima in jc as a function of the linewidths of the (00ℓ) Bragg reflections and as a function of the mosaic spread have been found in the epitaxial films. These maxima in jc indicate that the magnetic flux lines, in films of structural quality approaching that of single crystals, are insufficiently pinned, which results in a decreased critical current density. Tc increased monotonically with improving crystalline quality and approached a value characteristic of a pure single crystal. A strong correlation between jc and the density of edge dislocations ND was found. At the maximum of the critical current density the density of edge dislocations was estimated to be ND ≈ 1-2 × 10⁹/cm².
Living on the edge: roe deer (Capreolus capreolus) density in the margins of its geographical range.
Valente, Ana M; Fonseca, Carlos; Marques, Tiago A; Santos, João P; Rodrigues, Rogério; Torres, Rita Tinoco
2014-01-01
Over the last decades roe deer (Capreolus capreolus) populations have increased in number and distribution throughout Europe. Such increases have profound impacts on ecosystems, both positive and negative. Therefore monitoring roe deer populations is essential for the appropriate management of this species, in order to achieve a balance between conservation and mitigation of the negative impacts. Despite being required for an effective management plan, the study of roe deer ecology in Portugal is at an early stage, and hence there is still a complete lack of knowledge of roe deer density within its known range. Distance sampling of pellet groups, coupled with production and decay rates for pellet groups, provided density estimates for roe deer in northeastern Portugal (Lombada National Hunting Area--LNHA, Serra de Montesinho--SM and Serra da Nogueira--SN; LNHA and SM located in Montesinho Natural Park). The estimated roe deer density using a stratified detection function was 1.23/100 ha for LNHA, 4.87/100 ha for SM and 4.25/100 ha in SN, with 95% confidence intervals (CI) of 0.68 to 2.21, 3.08 to 7.71 and 2.25 to 8.03, respectively. For the entire area, the estimated density was about 3.51/100 ha (95% CI: 2.26-5.45). This method can provide estimates of roe deer density, which will ultimately support management decisions. However, effective monitoring should be based on long-term studies that are able to detect population fluctuations. This study represents the initial phase of roe deer monitoring at the edge of its European range and intends to fill the gap in this species' ecology, as the gathering of similar data over a number of years will provide the basis for stronger inferences. Monitoring should be continued, although the study area should be increased to evaluate the accuracy of estimates and assess the impact of management actions.
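For illustration, the standard conversion from pellet-group density to animal density takes the form sketched below; the rates are placeholders, not the production and decay values measured in the study.

```python
# Standard pellet-group conversion; the rates below are placeholders,
# not the production and decay values measured in the study.
pellet_groups_per_ha = 95.0    # from the fitted detection function
defecation_rate = 20.0         # pellet groups / deer / day -- assumed
mean_persistence_days = 150.0  # mean time to decay -- assumed

deer_per_ha = pellet_groups_per_ha / (defecation_rate * mean_persistence_days)
print(f"{100 * deer_per_ha:.2f} roe deer / 100 ha")
```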
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, Ludovic, E-mail: ludohumberto@gmail.com; Hazrati Marangalou, Javad; Rietbergen, Bert van
Purpose: Cortical thickness and density are critical components in determining the strength of bony structures. Computed tomography (CT) is one possible modality for analyzing the cortex in 3D. In this paper, a model-based approach for measuring the cortical bone thickness and density from clinical CT images is proposed. Methods: Density variations across the cortex were modeled as a function of the cortical thickness and density, location of the cortex, density of surrounding tissues, and imaging blur. High resolution micro-CT data of cadaver proximal femurs were analyzed to determine a relationship between cortical thickness and density. This thickness-density relationship was used as prior information to be incorporated in the model to obtain accurate measurements of cortical thickness and density from clinical CT volumes. The method was validated using micro-CT scans of 23 cadaver proximal femurs. Simulated clinical CT images with different voxel sizes were generated from the micro-CT data. Cortical thickness and density were estimated from the simulated images using the proposed method and compared with measurements obtained using the micro-CT images to evaluate the effect of voxel size on the accuracy of the method. Then, 19 of the 23 specimens were imaged using a clinical CT scanner. Cortical thickness and density were estimated from the clinical CT images using the proposed method and compared with the micro-CT measurements. Finally, a case-control study including 20 patients with osteoporosis and 20 age-matched controls with normal bone density was performed to evaluate the proposed method in a clinical context. Results: Cortical thickness (density) estimation errors were 0.07 ± 0.19 mm (−18 ± 92 mg/cm³) using the simulated clinical CT volumes with the smallest voxel size (0.33 × 0.33 × 0.5 mm³), and 0.10 ± 0.24 mm (−10 ± 115 mg/cm³) using the volumes with the largest voxel size (1.0 × 1.0 × 3.0 mm³). A trend for the cortical thickness and density estimation errors to increase with voxel size was observed and was more pronounced for thin cortices. Using clinical CT data for 19 of the 23 samples, mean errors of 0.18 ± 0.24 mm for the cortical thickness and 15 ± 106 mg/cm³ for the density were found. The case-control study showed that osteoporotic patients had a thinner cortex and a lower cortical density, with average differences of −0.8 mm and −58.6 mg/cm³ at the proximal femur in comparison with age-matched controls (p-value < 0.001). Conclusions: This method might be a promising approach for the quantification of cortical bone thickness and density using clinical routine imaging techniques. Future work will concentrate on investigating how this approach can improve the estimation of mechanical strength of bony structures, the prevention of fracture, and the management of osteoporosis.
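As a rough illustration of the model-based idea, a blurred two-edge density profile across the cortex can be fitted to a sampled CT profile; this is a simplified reading of the approach (the paper additionally incorporates the thickness-density prior), and all values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cortex_profile(x, x0, x1, rho_cort, rho_in, rho_out, sigma):
    """Blurred 1-D density profile across the cortex: two step edges at
    x0 < x1 convolved with a Gaussian PSF of width sigma."""
    step = lambda e: 0.5 * (1 + erf(e / (sigma * np.sqrt(2))))
    return (rho_in + (rho_cort - rho_in) * step(x - x0)
                   + (rho_out - rho_cort) * step(x - x1))

# Fit to a synthetic profile sampled along a line normal to the bone surface.
x = np.linspace(-5, 5, 101)                    # mm
rng = np.random.default_rng(6)
y = cortex_profile(x, -1.0, 1.2, 1100, 50, 200, 0.8) + rng.normal(0, 20, x.size)
popt, _ = curve_fit(cortex_profile, x, y, p0=[-0.5, 0.5, 1000, 0, 150, 1.0])
thickness = popt[1] - popt[0]                  # estimated cortical thickness
```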
THE EVOLUTION OF EARLY- AND LATE-TYPE GALAXIES IN THE COSMIC EVOLUTION SURVEY UP TO z {approx} 1.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pannella, Maurilio; Gabasch, Armin; Drory, Niv
2009-08-10
The Cosmic Evolution Survey (COSMOS) allows for the first time a highly significant census of environments and structures up to redshift 1, as well as a full morphological description of the galaxy population. In this paper we present a study aimed to constrain the evolution, in the redshift range 0.2 < z < 1.2, of the mass content of different morphological types and its dependence on the environmental density. We use a deep multicolor catalog, covering an area of ≈0.7 deg² inside the COSMOS field, with accurate photometric redshifts (i ≲ 26.5 and δz/(zspec + 1) ≈ 0.035). We estimate galaxy stellar masses by fitting the multicolor photometry to a grid of composite stellar population models. We quantitatively describe the galaxy morphology by fitting point-spread function convolved Sersic profiles to the galaxy surface brightness distributions down to F814W = 24 mag for a sample of 41,300 objects. We confirm an evolution of the morphological mix with redshift: the higher the redshift, the more important disk-dominated galaxies become. We find that the morphological mix is a function of the local comoving density: the morphology-density relation extends up to the highest redshift explored. The stellar mass function of disk-dominated galaxies is consistent with being constant with redshift. Conversely, the stellar mass function of bulge-dominated systems shows a decline in normalization with redshift. Such different behaviors of the late-type and early-type stellar mass functions naturally set the redshift evolution of the transition mass. We find a population of relatively massive, early-type galaxies having high specific star formation rates (SSFR) and blue colors, which live preferentially in low-density environments. The bulk of massive (>7 × 10¹⁰ M⊙) early-type galaxies have similar characteristic ages, colors, and SSFRs independently of the environment they belong to, with those hosting the oldest stars in the universe preferentially belonging to the highest density regions. The whole catalog, including the morphological information and stellar mass estimates analyzed in this work, is made publicly available.
Estimation of kinetic parameters from list-mode data using an indirect approach
NASA Astrophysics Data System (ADS)
Ortiz, Joseph Christian
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful in expediting the drug development process, as well as in providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments, the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two-part process, was used. First the compartmental activity was obtained from the data, and next the kinetic parameters were estimated. The novel aspect of the research is the use of list-mode data to obtain the activity curves from a system, as opposed to a traditional binned approach. Using techniques from information theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs on each photomultiplier tube, for each event, was generated on the fly and used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels as well as time sample densities was explored. Once an estimate for the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
Density functional study of molecular interactions in secondary structures of proteins.
Takano, Yu; Kusaka, Ayumi; Nakamura, Haruki
2016-01-01
Proteins play diverse and vital roles in biology, which are dominated by their three-dimensional structures. The three-dimensional structure of a protein determines its functions and chemical properties. Protein secondary structures, including α-helices and β-sheets, are key components of the protein architecture. Molecular interactions, in particular hydrogen bonds, play significant roles in the formation of protein secondary structures. Precise and quantitative estimations of these interactions are required to understand the principles underlying the formation of three-dimensional protein structures. In the present study, we have investigated the molecular interactions in α-helices and β-sheets, using ab initio wave function-based methods, the Hartree-Fock method (HF) and the second-order Møller-Plesset perturbation theory (MP2), density functional theory, and molecular mechanics. The characteristic interactions essential for forming the secondary structures are discussed quantitatively.
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
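As a simple illustration of the distance-sampling step, the sketch below fits a half-normal detection function by maximum likelihood and converts counts to density for a line-transect analogue; the trapping-web analysis in program DISTANCE is more involved, and all numbers here are synthetic.

```python
import numpy as np

# Synthetic detection distances (m); true detection falls off half-normally.
rng = np.random.default_rng(3)
d = np.abs(rng.normal(0, 8.0, 120))

# Half-normal detection function g(d) = exp(-d^2 / (2 sigma^2));
# the MLE of sigma^2 is the mean squared distance.
sigma = np.sqrt(np.mean(d**2))
esw = sigma * np.sqrt(np.pi / 2)   # effective strip half-width (m)

L = 5000.0                         # total transect length (m) -- assumed
density = d.size / (2 * L * esw)   # animals per m^2
print(f"{density * 1e4:.2f} animals / ha")
```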
Han, Jeong-Hwan; Oda, Takuji
2018-04-14
The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metals has not been sufficiently examined. In the present study, benchmark tests of Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations are significantly dependent on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
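A minimal sketch of how the pair correlation function g(r) can be computed from a simulation snapshot in a cubic periodic box; the positions below are random placeholders standing in for an ab initio molecular dynamics frame of liquid sodium.

```python
import numpy as np

def pair_correlation(pos, box, r_max, n_bins=100):
    """Radial distribution function g(r) for positions in a cubic
    periodic box -- the quantity benchmarked for liquid sodium."""
    n = len(pos)
    rho = n / box**3
    bins = np.linspace(0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)          # minimum-image convention
        dist = np.linalg.norm(dr, axis=1)
        counts += np.histogram(dist, bins=bins)[0]
    shell = 4 / 3 * np.pi * (bins[1:]**3 - bins[:-1]**3)
    ideal = rho * shell * n / 2                 # expected ideal-gas pair counts
    return 0.5 * (bins[1:] + bins[:-1]), counts / ideal

# Random snapshot in a 20 x 20 x 20 box; a real MD frame would show structure.
rng = np.random.default_rng(4)
r_mid, g = pair_correlation(rng.random((256, 3)) * 20.0, box=20.0, r_max=9.0)
```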
Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials
Carleton, James B.; D’Amore, Antonio; Feaver, Kristen R.; ...
2014-10-13
Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. This paper addresses these issues in two ways. First, using methods of geometric probability, we develop theoretical estimates for the mean linear and areal fiber intersection densities for 2-D fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Second, we develop a random walk algorithm for geometric simulation of 2-D fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of scanning electron microscope images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. Finally, the methods provided herein offer a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data.
Can Sgr A* flares reveal the molecular gas density PDF?
NASA Astrophysics Data System (ADS)
Churazov, E.; Khabibullin, I.; Sunyaev, R.; Ponti, G.
2017-11-01
Illumination of dense gas in the Central Molecular Zone by powerful X-ray flares from Sgr A* leads to prominent structures in the reflected emission that can be observed long after the end of the flare. By studying this emission, we learn about past activity of the supermassive black hole in our Galactic Center and, at the same time, we obtain unique information on the structure of molecular clouds that is essentially impossible to get by other means. Here we discuss how X-ray data can improve our knowledge of both sides of the problem. Existing data already provide (I) an estimate of the flare age, (II) a model-independent lower limit on the luminosity of Sgr A* during the flare and (III) an estimate of the total emitted energy during the Sgr A* flare. On the molecular clouds side, the data clearly show a voids-and-walls structure of the clouds and can provide an almost unbiased probe of the mass/density distribution of the molecular gas with hydrogen column densities lower than a few × 10²³ cm⁻². For instance, the probability distribution function of the gas density PDF(ρ) can be measured this way. Future high energy resolution X-ray missions will provide information on the gas velocities, allowing, for example, a reconstruction of the velocity field structure functions and cross-matching of the X-ray and molecular data based on positions and velocities.
Characterizing fishing effort and spatial extent of coastal fisheries.
Stewart, Kelly R; Lewison, Rebecca L; Dunn, Daniel C; Bjorkland, Rhema H; Kelez, Shaleyla; Halpin, Patrick N; Crowder, Larry B
2010-12-29
Biodiverse coastal zones are often areas of intense fishing pressure due to the high relative density of fishing capacity in these nearshore regions. Although overcapacity is one of the central challenges to fisheries sustainability in coastal zones, accurate estimates of fishing pressure in coastal zones are limited, hampering the assessment of the direct and collateral impacts (e.g., habitat degradation, bycatch) of fishing. We compiled a comprehensive database of fishing effort metrics and the corresponding spatial limits of fisheries and used a spatial analysis program (FEET) to map fishing effort density (measured as boat-meters per km²) in the coastal zones of six ocean regions. We also considered the utility of a number of socioeconomic variables as indicators of fishing pressure at the national level; fishing density increased as a function of population size and decreased as a function of coastline length. Our mapping exercise points to intra- and interregional 'hotspots' of coastal fishing pressure. The significant and intuitive relationships we found between fishing density and population size and coastline length may help with coarse regional characterizations of fishing pressure. However, spatially-delimited fishing effort data are needed to accurately map fishing hotspots, i.e., areas of intense fishing activity. We suggest that estimates of fishing effort, not just target catch or yield, serve as a necessary measure of fishing activity, which is a key link to evaluating sustainability and environmental impacts of coastal fisheries.
Estimating black bear density using DNA data from hair snares
Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.
2010-01-01
DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate of the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
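A minimal sketch of the detection model described above: half-normal decay of detection probability with distance from an individual's activity center, plus a behavioral response term. This is one plausible parameterization, not the paper's exact WinBUGS specification, and all values are hypothetical.

```python
import numpy as np

def detection_prob(s, traps, p0, sigma, prev_capture, beta):
    """Per-occasion detection probability of an individual with activity
    center s at each hair snare: half-normal decay in distance, with a
    multiplicative response for previously captured animals (assumed form)."""
    d2 = np.sum((traps - s) ** 2, axis=1)
    base = p0 * np.exp(-d2 / (2 * sigma**2))
    return np.clip(base * np.exp(beta * prev_capture), 0.0, 1.0)

# Hypothetical values: a 5x5 snare grid at 2-km spacing, sigma = 1.5 km.
gx, gy = np.meshgrid(np.arange(5) * 2.0, np.arange(5) * 2.0)
traps = np.column_stack([gx.ravel(), gy.ravel()])
p = detection_prob(np.array([3.0, 4.0]), traps, p0=0.1, sigma=1.5,
                   prev_capture=0, beta=0.5)
```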
Fault Detection of Rotating Machinery using the Spectral Distribution Function
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1997-01-01
The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, but requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
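The abstract does not spell out the estimator; one common construction of a spectral distribution function is the normalized cumulative integral of the power spectral density, sketched below under that assumption (Python; the signal and its parameters are illustrative):

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                              # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
# gear-mesh tone plus a weak fault sideband buried in noise
x = (np.sin(2 * np.pi * 1_000 * t)
     + 0.1 * np.sin(2 * np.pi * 1_150 * t)
     + 0.5 * rng.standard_normal(t.size))

f, psd = welch(x, fs=fs, nperseg=4096)     # conventional PSD estimate
sdf = np.cumsum(psd) / np.sum(psd)         # normalized cumulative spectrum, 0..1

# a developing fault shifts where the cumulative curve crosses a given fraction
print("frequency below which 50%% of power lies: %.0f Hz"
      % f[np.searchsorted(sdf, 0.5)])
```

Because cumulative integration smooths the raw PSD, summary statistics read off the curve fluctuate less between records, consistent with the robustness claim above.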
NASA Astrophysics Data System (ADS)
Makovníková, Jarmila; Širáň, Miloš; Houšková, Beata; Pálka, Boris; Jones, Arwyn
2017-10-01
Soil bulk density is one of the main direct indicators of soil health, and is an important aspect of models for determining agroecosystem services potential. By applying multi-regression methods, we have created a distributed prediction of soil bulk density used subsequently for topsoil carbon stock estimation. The soil data used for this study were from the Slovakian partial monitoring system-soil database. In our work, two models of soil bulk density in an equilibrium state, with different combinations of input parameters (soil particle size distribution and soil organic carbon content in %), have been created, and subsequently validated using a data set from 15 principal sampling sites of the Slovakian partial monitoring system-soil that were different from those used to generate the bulk density equations. We have compared measured bulk density data and data calculated by the pedotransfer equations against soil bulk density calculated according to equations recommended by the Joint Research Centre Sustainable Resources for Europe. The differences between measured soil bulk density and the model values vary from -0.144 to 0.135 g cm⁻³ in the verification data set. Furthermore, all models based on pedotransfer functions give moderately lower values. The soil bulk density model was then applied to generate a first approximation of a soil bulk density map for Slovakia using texture information from 17 523 sampling sites, and was subsequently utilised for topsoil organic carbon estimation.
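A minimal sketch of the kind of pedotransfer regression described above, fitting bulk density to particle size fractions and soil organic carbon by ordinary least squares (Python; the sample values and resulting coefficients are invented for illustration and are not the paper's equations):

```python
import numpy as np

# columns: clay %, silt %, soil organic carbon %  (invented sample values)
X = np.array([[22.0, 45.0, 1.8],
              [35.0, 40.0, 1.1],
              [12.0, 30.0, 2.9],
              [28.0, 50.0, 0.9],
              [18.0, 35.0, 2.1]])
y = np.array([1.38, 1.47, 1.21, 1.52, 1.33])  # measured bulk density, g/cm3

# ordinary least squares with an intercept, the core of a pedotransfer fit
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
print("coefficients:", np.round(coef, 3))
print("residuals (g/cm3):", np.round(resid, 3))
```

Validation against an independent set of sites, as done in the paper, amounts to evaluating such residuals on data held out from the fit.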
González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
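A small sketch of the first modelling step, fitting a Weibull probability density function to a vertical canopy fuel profile (Python via scipy; the sample is synthetic, and fixing the location parameter at the canopy base is an assumption, not a detail taken from the paper):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
# heights (m) at which canopy fuel increments were recorded (synthetic)
heights = weibull_min.rvs(c=2.3, scale=9.0, size=400, random_state=rng)

# two-parameter Weibull fit, location fixed at the canopy base (0 m)
c_hat, loc, scale_hat = weibull_min.fit(heights, floc=0.0)
print(f"shape = {c_hat:.2f}, scale = {scale_hat:.2f} m")

# fraction of the canopy fuel load below a given height, from the fitted CDF
print("fuel fraction below 6 m: %.2f" % weibull_min.cdf(6.0, c_hat, loc, scale_hat))
```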
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G
2016-12-01
The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (E_HOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R² = 79.57 and Q² = 69.67 that were validated with four internal validation metrics, DK = 0.076, DQ = -0.006, R_P = 0.056, and R_N = 0.000, and the external validation Q²_boot = 64.26. The resulting QSAR model can be used to estimate biological activity with high reliability for new compounds based on the DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
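A brief sketch contrasting the classical sample covariance with a robust alternative for locating the ellipsoidal contour (Python/scikit-learn; the minimum covariance determinant estimator stands in here for the robust Khachiyan variant discussed in the paper, and the two-band data are synthetic):

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(1)
bg = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=2000)
outliers = rng.uniform(-25, 25, size=(40, 2))      # large lever-arm points
X = np.vstack([bg, outliers])

for est in (EmpiricalCovariance().fit(X), MinCovDet(random_state=0).fit(X)):
    # squared Mahalanobis distance defines the ellipsoidal contours; their
    # upper quantiles set the contour used at a given false alarm rate
    d2 = est.mahalanobis(X)
    print(type(est).__name__, "99th-percentile contour:",
          round(float(np.percentile(d2, 99)), 1))
```

The robust fit shrinks the estimated ellipsoid back toward the true background shape, which is exactly the periphery-characterization problem the paper studies.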
NASA Astrophysics Data System (ADS)
Sahni, V.; Ma, C. Q.
1980-12-01
The inhomogeneous electron gas at a jellium metal surface is studied in the Hartree-Fock approximation by Kohn-Sham density functional theory. Rigorous upper bounds to the surface energy are derived by application of the Rayleigh-Ritz variational principle for the energy, the surface kinetic, electrostatic, and nonlocal exchange energy functionals being determined exactly for the accurate linear-potential model electronic wave functions. The densities obtained by the energy minimization constraint are then employed to determine work-function results via the variationally accurate "displaced-profile change-in-self-consistent-field" expression. The theoretical basis of this non-self-consistent procedure and its demonstrated accuracy for the fully correlated system (as treated within the local-density approximation for exchange and correlation) leads us to conclude these results for the surface energies and work functions to be essentially exact. Work-function values are also determined by the Koopmans'-theorem expression, both for these densities as well as for those obtained by satisfaction of the constraint set on the electrostatic potential by the Budd-Vannimenus theorem. The use of the Hartree-Fock results in the accurate estimation of correlation-effect contributions to these surface properties of the nonuniform electron gas is also indicated. In addition, the original work and approximations made by Bardeen in this attempt at a solution of the Hartree-Fock problem are briefly reviewed in order to contrast with the present work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutta, S.; Saha, J. K.; Chandra, R.
The Rayleigh-Ritz variational technique with a Hylleraas basis set is tested for the first time to estimate the structural modifications of a lithium atom embedded in a weakly coupled plasma environment. The Debye-Hückel potential is used to mimic the weakly coupled plasma environment. The wave functions for both the helium-like lithium ion and the lithium atom are expanded in the explicitly correlated Hylleraas-type basis set, which fully takes care of the electron-electron correlation effect. Due to the continuum lowering under the plasma environment, the ionization potential of the system gradually decreases, leading to the destabilization of the atom. The excited states destabilize at a lower value of the plasma density. The estimated ionization potential agrees fairly well with the few available theoretical estimates. The variation of one- and two-particle moments, dielectric susceptibility, and magnetic shielding constant with respect to plasma density is also discussed in detail.
NASA Technical Reports Server (NTRS)
Conel, James E.; Hoover, Gordon; Nolin, Anne; Alley, Ron; Margolis, Jack
1992-01-01
Empirical relationships between variables are ways of securing estimates of quantities difficult to measure by remote sensing methods. The use of empirical functions was explored between: (1) atmospheric column moisture abundance W (g H2O/cm²) and surface absolute water vapor density ρq̄ (g H2O/cm³), with ρ the density of moist air (g/cm³) and q̄ the specific humidity (g H2O/g moist air), and (2) column abundance and surface moisture flux E (g H2O/(cm² s)), to infer regional evapotranspiration from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) water vapor mapping data. AVIRIS provides, via analysis of atmospheric water absorption features, estimates of column moisture abundance at a very high mapping rate (approximately 100 km² per 40 s) over large areas at 20 m ground resolution.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both simultaneously. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in PDF parameters and to randomness with a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
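A minimal sketch of the trajectory design idea described above: deterministically fully sample a low-frequency core of k-space and apply variable-density random sampling only to the high frequencies (Python; the function name, polynomial decay law, and parameter values are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def vd_mask(n=256, accel=4, center_frac=0.08, decay=3.0, seed=0):
    """1D variable-density mask: fully sample a low-frequency core,
    then draw high-frequency lines from a polynomially decaying PDF."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n // 2)    # normalized |k|
    core = k <= center_frac                          # deterministic center
    pdf = (1 - k) ** decay
    pdf[core] = 0                                    # no double sampling
    n_rand = max(n // accel - core.sum(), 0)
    idx = rng.choice(np.where(~core)[0], size=n_rand, replace=False,
                     p=pdf[~core] / pdf[~core].sum())
    mask = core.copy()
    mask[idx] = True
    return mask

m = vd_mask()
print("sampling fraction: %.2f" % m.mean())
```

Keeping the core deterministic removes one source of realization-to-realization variability, which is the intuition behind the robustness result reported above.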
Adsorption of the astatine species on a gold surface: A relativistic density functional theory study
NASA Astrophysics Data System (ADS)
Demidov, Yuriy; Zaitsevskii, Andréi
2018-01-01
We report first-principles-based studies of the adsorption interaction of astatine species on a gold surface. These studies are aimed primarily at the support and interpretation of gas chromatographic experiments with superheavy elements, tennessine (Ts, Z = 117), a heavier homologue of At, and possibly its pseudo-homologue nihonium (Nh, Z = 113). We use gold clusters with up to 69 atoms to simulate the adsorption sites and estimate the desorption energies of At and AtOH from a stable gold(111) surface. To describe the electronic structure of At-Au_n and AtOH-Au_n complexes, we combine accurate shape-consistent relativistic pseudopotentials and non-collinear two-component relativistic density functional theory. The predicted desorption energies of At and AtOH on gold are 130 ± 10 kJ/mol and 90 ± 10 kJ/mol, respectively. These results confirm the validity of the estimates derived from chromatographic data (147 ± 15 kJ/mol for At, and 100 (+20/-10) kJ/mol for AtOH).
Ecosystem-scale plant hydraulic strategies inferred from remotely-sensed soil moisture
NASA Astrophysics Data System (ADS)
Bassiouni, M.; Good, S. P.; Higgins, C. W.
2017-12-01
Characterizing plant hydraulic strategies at the ecosystem scale is important to improve estimates of evapotranspiration and to understand ecosystem productivity and resilience. However, quantifying plant hydraulic traits beyond the species level is a challenge. The probability density function of soil moisture observations provides key information about the soil moisture states at which evapotranspiration is reduced by water stress. Here, an inverse Bayesian approach is applied to a standard bucket model of soil column hydrology forced with stochastic precipitation inputs. Through this approach, we are able to determine the soil moisture thresholds at which stomata are open or closed that are most consistent with observed soil moisture probability density functions. This research utilizes remotely-sensed soil moisture data to explore global patterns of ecosystem-scale plant hydraulic strategies. Results are complementary to literature values of measured hydraulic traits of various species in different climates and previous estimates of ecosystem-scale plant isohydricity. The presented approach provides a novel relation between plant physiological behavior and soil-water dynamics.
Density Estimation for New Solid and Liquid Explosives
1977-02-17
The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds ... of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons ... could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, and 11.9% were within 2-3%.
Longitudinal Differences of Ionospheric Vertical Density Distribution and Equatorial Electrodynamics
NASA Technical Reports Server (NTRS)
Yizengaw, E.; Zesta, E.; Moldwin, M. B.; Damtie, B.; Mebrahtu, A.; Valledares, C.E.; Pfaff, R. F.
2012-01-01
Accurate estimation of the global vertical distribution of ionospheric and plasmaspheric density as a function of local time, season, and magnetic activity is required to improve the operation of space-based navigation and communication systems. The vertical density distribution, especially at low and equatorial latitudes, is governed by the equatorial electrodynamics that produces a vertical driving force. The vertical structure of the equatorial density distribution can be observed by using tomographic reconstruction techniques on ground-based global positioning system (GPS) total electron content (TEC). Similarly, the vertical drift, which is one of the driving mechanisms that govern equatorial electrodynamics and strongly affect the structure and dynamics of the ionosphere in the low/midlatitude region, can be estimated using ground magnetometer observations. We present tomographically reconstructed density distributions and the corresponding vertical drifts at two different longitudes: the East African and west South American sectors. Chains of GPS stations in the east African and west South American longitudinal sectors, covering the equatorial anomaly region at meridians of approximately 37 deg and 290 deg E, respectively, are used to reconstruct the vertical density distribution. Similarly, magnetometer sites of the African Meridian B-field Education and Research (AMBER) network and INTERMAGNET for the east African sector, and of the South American Meridional B-field Array (SAMBA) and the Low Latitude Ionospheric Sensor Network (LISN), are used to estimate the vertical drift velocity at the two longitudes. The comparison between the reconstructed and Jicamarca Incoherent Scatter Radar (ISR) measured density profiles shows excellent agreement, demonstrating the usefulness of the tomographic reconstruction technique in providing the vertical density distribution at different longitudes. Similarly, the comparison between the magnetometer-estimated vertical drift and independent drift observations, such as those from VEFI onboard the Communication/Navigation Outage Forecasting System (C/NOFS) satellite and the JULIA radar, is equally promising. The observations at different longitudes suggest that the vertical drift velocities and the vertical density distribution have significant longitudinal differences; in particular, the equatorial anomaly peaks expand to higher latitudes more in the American sector than in the African sector, indicating that the vertical drift in the American sector is stronger than in the African sector.
NASA Astrophysics Data System (ADS)
Elsner, F.; Feulner, G.; Hopp, U.
2008-01-01
Aims: We estimate stellar masses of galaxies in the high redshift universe with the intention of determining the influence of newly available Spitzer/IRAC infrared data on the analysis. Based on the results, we probe the mass assembly history of the universe. Methods: We use the GOODS-MUSIC catalog, which provides multiband photometry from the U-filter to the 8 μm Spitzer band for almost 15 000 galaxies with either spectroscopic (for ≈7% of the sample) or photometric redshifts, and apply a standard model fitting technique to estimate stellar masses. We then repeat our calculations with fixed photometric redshifts excluding Spitzer photometry and directly compare the outcomes to look for systematic deviations. Finally we use our results to compute stellar mass functions and mass densities up to redshift z = 5. Results: We find that stellar masses tend to be overestimated on average if further constraining Spitzer data are not included in the analysis. Whilst this trend is small up to intermediate redshifts z ⪉ 2.5 and falls within the typical error in mass, the deviation increases strongly for higher redshifts and reaches a maximum of a factor of three at redshift z ≈ 3.5. Thus, up to intermediate redshifts, results for stellar mass density are in good agreement with values taken from the literature calculated without additional Spitzer photometry. At higher redshifts, however, we find a systematic trend towards lower mass densities if Spitzer/IRAC data are included.
Shokuhfar, Ali; Arab, Behrouz
2013-09-01
Recently, great attention has been focused on using epoxy polymers in fields such as aerospace, automotive, biotechnology, and electronics, owing to their superior properties. In this study, classical molecular dynamics (MD) was used to simulate the cross-linking of diglycidyl ether of bisphenol-A (DGEBA) with diethylenetriamine (DETA) curing agent, and to study the behavior of the resulting epoxy polymer at different conversion rates. The constant-strain (static) approach was then applied to calculate the mechanical properties (bulk, shear, and Young's moduli, elastic stiffness constants, and Poisson's ratio) of the uncured and cross-linked systems. Estimated material properties were found to be in good agreement with experimental observations. Moreover, the dependency of mechanical properties on the cross-linking density was investigated and revealed improvements in the mechanical properties with increasing cross-linking density. The radial distribution function (RDF) was also used to study the evolution of local structures of the simulated systems as a function of cross-linking density.
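A compact sketch of the radial distribution function computation mentioned above, for particles in a periodic cubic box with minimum-image distances (Python; the positions are uniformly random here, so g(r) should hover around 1, unlike a structured polymer network):

```python
import numpy as np

def rdf(pos, box, n_bins=100, r_max=None):
    """Radial distribution function g(r) for particles in a cubic
    periodic box, using the minimum-image convention."""
    n = len(pos)
    r_max = r_max or box / 2
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= box * np.round(diff / box)               # periodic wrap
    r = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0, r_max))
    rho = n / box**3
    shell = 4 / 3 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # normalize pair counts by the ideal-gas expectation
    g = hist / (shell * rho * n / 2)
    return 0.5 * (edges[1:] + edges[:-1]), g

pos = np.random.default_rng(0).uniform(0, 10.0, size=(500, 3))
r, g = rdf(pos, box=10.0)
print("g(r) ~ 1 for an ideal gas:", g[10:20].round(2))
```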
Metal-to-insulator transition induced by UV illumination in a single SnO2 nanobelt
NASA Astrophysics Data System (ADS)
Viana, E. R.; Ribeiro, G. M.; de Oliveira, A. G.; González, J. C.
2017-11-01
An individual tin oxide (SnO2) nanobelt was connected in a back-gate field-effect transistor configuration and the conductivity of the nanobelt was measured at temperatures from 400 K to 4 K, in darkness and under UV illumination. In darkness, the SnO2 nanobelts showed semiconductor behavior over the whole temperature range measured. However, when subjected to UV illumination the photoinduced carrier density was high enough to lead to a metal-to-insulator transition (MIT), near room temperature, at T_MIT = 240 K. By measuring the current versus gate voltage curves, and considering the electrostatic properties of a non-ideal conductor for the SnO2 nanobelt on top of a gate-oxide substrate, we estimated the capacitance per unit length, the mobility, and the density of carriers. In darkness, the density was estimated to be 5-10 × 10¹⁸ cm⁻³, in agreement with our previously reported result (Phys. Status Solidi RRL 6, 262-4 (2012)). However, under UV illumination the density of carriers was estimated to be 0.2-3.8 × 10¹⁹ cm⁻³ near T_MIT, which exceeded the critical Mott density, estimated to be 2.8 × 10¹⁹ cm⁻³, above 240 K. These results show that the electrical properties of SnO2 nanobelts can be drastically modified and easily tuned from semiconducting to metallic states as a function of temperature and light.
NASA Astrophysics Data System (ADS)
Soom, F.; Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.
2016-12-01
The Arctic tundra with its permafrost-dominated soils is one of the regions most affected by global climate change and, in turn, can also influence the changing climate through biogeochemical processes, including greenhouse gas release or storage. Characterization of shallow permafrost distribution and characteristics is required for predicting ecosystem feedbacks to a changing climate over decadal to century timescales, because they can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. In this study, part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we use X-ray computed tomography (CT) to estimate the wet bulk density of cores extracted from a field site near Barrow, AK, which extend 2-3 m through the active layer into the permafrost. We use multi-dimensional relationships inferred from destructive core sample analysis to infer organic matter density, dry bulk density, and ice content, along with some geochemical properties, from nondestructive CT scans along the entire length of the cores, information that the spatially limited destructive laboratory analysis could not provide. Multi-parameter cross-correlations showed good agreement between soil properties estimated from CT scans and properties obtained through destructive sampling. Soil properties estimated from cores located in different types of polygons provide valuable information about the vertical distribution of soil and permafrost properties as a function of geomorphology.
Costa, Flávia R C; Lang, Carla; Almeida, Danilo R A; Castilho, Carolina V; Poorter, Lourens
2018-05-16
The linking of individual functional traits to ecosystem processes is the basis for making generalizations in ecology, but the measurement of individual values is laborious and time consuming, preventing large-scale trait mapping. Also, in hyper-diverse systems, errors occur because identification is difficult, and species-level values ignore intra-specific variation. To allow extensive trait mapping at the individual level, we evaluated the potential of Fourier-transform near-infrared spectroscopy (FT-NIR) to adequately describe 14 traits that are key for plant carbon, water, and nutrient balance. FT-NIR absorption spectra (1,000-2,500 nm) were obtained from dry leaves and branches of 1,324 trees of 432 species from a hyper-diverse Amazonian forest. FT-NIR spectra were related to measured traits for the same plants using partial least squares regressions. A further 80 plants were collected from a different site to evaluate model applicability across sites. Relative prediction error (RMSE_rel) was calculated as the percentage of the trait value range represented by the final model RMSE. The key traits used in most functional trait studies, specific leaf area, leaf dry matter content, wood density, and wood dry matter content, can be well predicted by the model (R² = 0.69-0.78, RMSE_rel = 9-11%), while leaf density, xylem proportion, bark density, and bark dry matter content can be moderately well predicted (R² = 0.53-0.61, RMSE_rel = 14-17%). Community-weighted means of all traits were well estimated with NIR, as was the shape of the frequency distribution of the community values for the above key traits. The model developed at the core site provided good estimates of the key traits at a different site. An evaluation of the sampling effort indicated that 400 or fewer individuals may be sufficient for establishing a good local model. We conclude that FT-NIR is an easy, fast, and cheap method for the large-scale estimation of individual plant traits that was previously impossible. The ability to use dry intact leaves and branches unlocks the potential for using herbarium material to estimate functional traits, thus advancing our knowledge of community and ecosystem functioning from local to global scales. © 2018 by the Ecological Society of America.
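A schematic of the PLS workflow above, including the relative-error metric defined in the abstract (Python/scikit-learn; the spectra and trait values are simulated, and the component count is an arbitrary choice, not the study's tuned model):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n, p = 300, 150                                    # plants x spectral bands
spectra = rng.normal(size=(n, p)).cumsum(axis=1)   # smooth-ish fake spectra
trait = spectra[:, 40] * 0.02 + rng.normal(0, 0.1, n)  # e.g., wood density

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra, trait, cv=10).ravel()

r2 = 1 - np.sum((trait - pred) ** 2) / np.sum((trait - trait.mean()) ** 2)
# RMSE_rel: RMSE as a percentage of the trait value range, as in the paper
rmse_rel = 100 * np.sqrt(np.mean((trait - pred) ** 2)) / np.ptp(trait)
print(f"R^2 = {r2:.2f}, RMSE_rel = {rmse_rel:.0f}%")
```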
The local [C ii] 158 μm emission line luminosity function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmati, Shoubaneh; Yan, Lin; Capak, Peter
We present, for the first time, the local [C ii] 158 μm emission line luminosity function measured using a sample of more than 500 galaxies from the Revised Bright Galaxy Sample. [C ii] luminosities are measured from the Herschel PACS observations of the Luminous Infrared Galaxies (LIRGs) in the Great Observatories All-sky LIRG Survey and estimated for the rest of the sample based on the far-infrared (far-IR) luminosity and color. The sample covers 91.3% of the sky and is complete at S_60μm > 5.24 Jy. We calculate the completeness as a function of [C ii] line luminosity and distance, based on the far-IR color and flux densities. The [C ii] luminosity function is constrained in the range ~10^7-10^9 L_⊙ from both the 1/V_max and maximum likelihood methods. The shape of our derived [C ii] emission line luminosity function agrees well with the IR luminosity function. For the CO(1-0) and [C ii] luminosity functions to agree, we propose a varying ratio of [C ii]/CO(1-0) as a function of CO luminosity, with larger ratios for fainter CO luminosities. Limited [C ii] high-redshift observations, as well as estimates based on the IR and UV luminosity functions, are suggestive of an evolution in the [C ii] luminosity function similar to the evolution trend of the cosmic star formation rate density. Deep surveys using the Atacama Large Millimeter Array with full capability will be able to confirm this prediction.
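A schematic of the 1/V_max estimator named above, in a flat, static toy geometry that ignores cosmology and K-corrections (Python; the flux scaling and the sample are invented, and only the inverse-volume weighting logic is the point):

```python
import numpy as np

rng = np.random.default_rng(4)
S_lim, f_sky = 5.24, 0.913                 # flux limit (Jy) and sky coverage

# toy flux-limited sample: luminosities (L_sun) and distances (Mpc)
L = 10 ** rng.uniform(7, 9, 3000)
d = rng.uniform(5, 300, 3000)
S = L / d**2 * 1e-4                        # illustrative flux scale only
keep = S > S_lim
L, d, S = L[keep], d[keep], S[keep]

# 1/Vmax: each object is weighted by the inverse of the volume within
# which it would still have exceeded the survey flux limit
d_max = d * np.sqrt(S / S_lim)
v_max = f_sky * 4 / 3 * np.pi * d_max**3
bins = np.logspace(7, 9, 9)
phi, _ = np.histogram(L, bins=bins, weights=1 / v_max)
phi /= np.diff(np.log10(bins))             # number density per dex
print("phi per dex:", phi)
```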
NASA Astrophysics Data System (ADS)
Ko, Hsin-Yu; Santra, Biswajit; Distasio, Robert A., Jr.; Wu, Xifan; Car, Roberto
Hybrid functionals are known to alleviate the self-interaction error in density functional theory (DFT) and provide a more accurate description of the electronic structure of molecules and materials. However, hybrid DFT in the condensed phase has a prohibitively high computational cost, which limits its applicability to large systems of interest. In this work, we present a general-purpose O(N) implementation of condensed-phase hybrid DFT using maximally localized Wannier functions; this implementation is optimized for massively parallel computing architectures. The algorithm is used to perform large-scale ab initio molecular dynamics simulations of liquid water, ice, and aqueous ionic solutions. We have performed simulations in the isothermal-isobaric ensemble to quantify the effects of exact exchange on the equilibrium density properties of water at different thermodynamic conditions. We find that the anomalous density difference between ice Ih and liquid water at ambient conditions, as well as the enthalpy differences between the ice Ih, II, and III phases at the experimental triple point (238 K and 2.1 kbar), are significantly improved using hybrid DFT over previous estimates using the lower rungs of DFT. This work has been supported by the Department of Energy under Grants No. DE-FG02-05ER46201 and DE-SC0008626.
Thermodynamic Properties of HCFC142b
NASA Astrophysics Data System (ADS)
Fukushima, Masato; Watanabe, Naohiro
Thermodynamic properties of HCFC142b, namely saturated densities, vapor pressures, and PVT properties, were measured, and the critical parameters were determined from those experimental results. The correlations for vapor pressure, saturated liquid density, and PVT properties deduced from those experimental results were compared with the measured data and also with the estimates of other correlations published in the literature. The thermodynamic functions, such as enthalpy, entropy, and heat capacity, can be considered to be reasonably estimated by the expression reported in this paper.
Biologically Assembled Quantum Electronic Arrays
2013-06-07
characterizing the NP arrays. A theory of gate-tunable exchange coupling for cobalt NPs on graphene was developed using spin-density-functional theory and ... polarization. We can estimate this field using the material parameters for cobalt [equation garbled in source], where N1 is the ... minority-spin density of states at the Fermi surface for cobalt, Ms is its saturation magnetization, and Mx is the x-component of the magnetization.
Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit
2009-01-01
Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
Morris, Jonathan R.; Vandermeer, John; Perfecto, Ivette
2015-01-01
Species’ functional traits are an important part of the ecological complexity that determines the provisioning of ecosystem services. In biological pest control, predator response to pest density variation is a dynamic trait that impacts the provision of this service in agroecosystems. When pest populations fluctuate, farmers relying on biocontrol services need to know how natural enemies respond to these changes. Here we test the effect of variation in coffee berry borer (CBB) density on the biocontrol efficiency of a keystone ant species (Azteca sericeasur) in a coffee agroecosystem. We performed exclosure experiments to measure the infestation rate of CBB released on coffee branches in the presence and absence of ants at four different CBB density levels. We measured infestation rate as the number of CBB bored into fruits after 24 hours, quantified biocontrol efficiency (BCE) as the proportion of infesting CBB removed by ants, and estimated functional response from ant attack rates, measured as the difference in CBB infestation between branches. Infestation rates of CBB on branches with ants were significantly lower (71%-82%) than on those without ants across all density levels. Additionally, biocontrol efficiency was generally high and did not significantly vary across pest density treatments. Furthermore, ant attack rates increased linearly with increasing CBB density, suggesting a Type I functional response. These results demonstrate that ants can provide robust biological control of CBB, despite variation in pest density, and that the response of predators to pest density variation is an important factor in the provision of biocontrol services. Considering how natural enemies respond to changes in pest densities will allow for more accurate biocontrol predictions and better-informed management of this ecosystem service in agroecosystems. PMID:26562676
The most massive galaxies and black holes allowed by ΛCDM
NASA Astrophysics Data System (ADS)
Behroozi, Peter; Silk, Joseph
2018-07-01
Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected Lambda Cold Dark Matter (ΛCDM) halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST (James Webb Space Telescope) and WFIRST (Wide-Field Infrared Survey Telescope) will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass to stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
Weck, Philippe F; Kim, Eunja
2014-12-07
The structure of dehydrated schoepite, α-UO2(OH)2, was investigated using computational approaches that go beyond standard density functional theory and include van der Waals dispersion corrections (DFT-D). Thermal properties of α-UO2(OH)2 were also obtained from phonon frequencies calculated with density functional perturbation theory (DFPT) including van der Waals dispersion corrections. While the isobaric heat capacity computed from first principles reproduces available calorimetric data to within 5% up to 500 K, some entropy estimates based on calorimetric measurements for UO3·0.85H2O were found to overestimate by up to 23% the values computed in this study.
Breast density estimation from high spectral and spatial resolution MRI
Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.
2016-01-01
A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists' BI-RADS ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, Rachel; Young, Katherine
Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to have high risk in the early stages of a project because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To aid in lowering exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were the Moeck and Beardsmore play types (Moeck et al. 2014) and the Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified based on their associated play type, and reservoir characteristics data were then catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or to use the probability distribution functions provided for the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessments of geothermal resources by comparing their estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made from the comparison.
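A minimal Monte Carlo sketch in the spirit of the spreadsheet model described above: reservoir characteristics are drawn from play-type analogue distributions and propagated through a volumetric heat-in-place calculation (Python; every distribution shape, constant, and conversion factor below is an illustrative assumption, not the paper's or the USGS's values):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# play-type analogue distributions, standing in for the histogram-derived
# PDFs described above (shapes and ranges are invented)
area_km2 = rng.triangular(2, 8, 25, N)            # reservoir area
thick_m  = rng.triangular(300, 800, 2000, N)      # reservoir thickness
temp_c   = rng.normal(220, 25, N)                 # reservoir temperature
vol_heat = 2.7e6                                  # J/(m^3 K), rock + fluid
t_reject = 90.0                                   # rejection temperature, C

# volumetric heat in place, then recovery and conversion to electricity
q_joules = area_km2 * 1e6 * thick_m * vol_heat * (temp_c - t_reject)
mwe = (q_joules * rng.uniform(0.05, 0.15, N)      # recovery factor
       * 0.12                                     # conversion efficiency
       / (30 * 3.15e7)                            # 30-year plant life, s
       / 1e6)                                     # W -> MW

print("P90/P50/P10 capacity (MWe):", np.percentile(mwe, [10, 50, 90]).round(0))
```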
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregation of empirical data are considered, namely improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; to study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
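A small sketch of the aggregation step described above: raw observations are replaced by a histogram-based density estimate, which is then represented by a piecewise-polynomial (spline) function that can serve as the regression "observation" (Python; the bin count and smoothing level are arbitrary choices, not the paper's procedure):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(9)
raw = rng.normal(10.0, 2.0, size=50_000)      # raw empirical observations

# aggregation: replace the raw sample by a normalized histogram density
dens, edges = np.histogram(raw, bins=60, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])

# piecewise-polynomial (cubic spline) representation of the aggregate
f = UnivariateSpline(mids, dens, k=3, s=1e-4)
print("integral of spline density ~ 1:",
      round(float(f.integral(edges[0], edges[-1])), 3))
```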
Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria.
Distiller, Greg B; Little, Francesca; Barnes, Karen I
2010-02-26
Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow-up period were included in this analysis. A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the covariates that were found to improve the model. A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12), and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up.
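For orientation, a sketch of fitting one common form of the critical exponential curve, (a + b*t) * exp(-k*t), to a single patient's gametocyte densities (Python; the data are invented, and the paper's modified function and its population-level mixed-effects structure are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_exponential(t, a, b, k):
    # One common critical exponential form: rises from near zero, peaks,
    # then decays, the qualitative shape of post-treatment gametocytemia.
    return (a + b * t) * np.exp(-k * t)

t = np.array([3, 7, 14, 21, 28, 42], float)       # follow-up days
y = np.array([40, 180, 260, 150, 60, 15], float)  # gametocytes/uL, toy data

p0 = [0.0, 30.0, 0.1]                             # rough starting values
popt, pcov = curve_fit(critical_exponential, t, y, p0=p0)
print("fitted (a, b, k):", np.round(popt, 3))
```

A nonlinear mixed-effects fit, as used in the paper, would additionally let (a, b, k) vary by patient around population means, with covariates such as age and site entering those means.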
Variability of daily UV index in Jokioinen, Finland, in 1995-2015
NASA Astrophysics Data System (ADS)
Heikkilä, A.; Uusitalo, K.; Kärhä, P.; Vaskuri, A.; Lakkala, K.; Koskela, T.
2017-02-01
The UV Index is a measure of UV radiation harmful to human skin, developed and used to promote sun awareness and sun protection. Monitoring programs conducted around the world have produced a number of long-term time series of UV irradiance. One of the longest time series of solar spectral UV irradiance in Europe has been obtained from the continuous measurements of the Brewer #107 spectrophotometer in Jokioinen (lat. 60°44'N, lon. 23°30'E), Finland, over the years 1995-2015. We have used descriptive statistics and estimates of cumulative distribution functions, quantiles, and probability density functions in the analysis of the time series of daily UV Index maxima. Seasonal differences in the estimated distributions and in the trends of the estimated quantiles are found.
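A minimal sketch of the descriptive statistics named above, i.e., empirical quantiles and a histogram-based density estimate of daily UV index maxima (Python; the gamma-distributed sample is an invented stand-in for the Brewer record):

```python
import numpy as np

rng = np.random.default_rng(5)
# stand-in for ~21 seasons of daily UV index maxima (right-skewed)
uvi = rng.gamma(shape=2.0, scale=1.2, size=21 * 150)

# empirical quantiles of the daily-maximum distribution
q = np.quantile(uvi, [0.5, 0.9, 0.99])
print("median / 90th / 99th percentile UV index:", q.round(2))

# simple probability density estimate via a normalized histogram
dens, edges = np.histogram(uvi, bins=40, density=True)
```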
Spectral Density of Laser Beam Scintillation in Wind Turbulence. Part 1; Theory
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1997-01-01
The temporal spectral density of the log-amplitude scintillation of a laser beam wave due to a spatially dependent vector-valued crosswind (deterministic as well as random) is evaluated. The path weighting functions for normalized spectral moments are derived, and offer a potential new technique for estimating the wind velocity profile. The Tatarskii-Klyatskin stochastic propagation equation for the Markov turbulence model is used with the solution approximated by the Rytov method. The Taylor 'frozen-in' hypothesis is assumed for the dependence of the refractive index on the wind velocity, and the Kolmogorov spectral density is used for the refractive index field.
Homogeneous buoyancy-generated turbulence
NASA Technical Reports Server (NTRS)
Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.
1992-01-01
Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of infinite fluid at an initial instant is examined. It is shown that analytical results together with numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. Results of numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
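A brief sketch of the preprocessing step described above: a Box-Cox power-law transform is applied to a skewed band, and Gaussian statistics are then estimated on the transformed data (Python; this shows only the marginal, single-band case, not the joint multivariate estimate the paper develops):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
band = rng.lognormal(mean=0.5, sigma=0.6, size=5000)   # skewed clutter band

# Box-Cox transform; lambda is chosen by maximum likelihood
z, lam = stats.boxcox(band)
print(f"lambda = {lam:.2f}, skewness before/after: "
      f"{stats.skew(band):.2f} / {stats.skew(z):.2f}")

# Gaussian statistics estimated after transformation feed the
# Gaussian-based detector and its performance model
mu, var = z.mean(), z.var(ddof=1)
```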
NASA Technical Reports Server (NTRS)
Zeng, X. C.; Stroud, D.
1989-01-01
The previously developed Ginzburg-Landau theory for calculating the crystal-melt interfacial tension of bcc elements is extended to treat the classical one-component plasma (OCP), the charged fermion system, and the Bose crystal. For the OCP, a direct application of the theory of Shih et al. (1987) yields a surface tension of 0.0012(Z²e²/a³), where Ze is the ionic charge and a is the radius of the ionic sphere. The Bose crystal-melt interface is treated by a quantum extension of the classical density-functional theory, using the Feynman formalism to estimate the relevant correlation functions. The theory is applied to the metastable He-4 solid-superfluid interface at T = 0, with a resulting surface tension of 0.085 erg/sq cm, in reasonable agreement with the value extrapolated from the measured surface tension of the bcc solid in the range 1.46-1.76 K. These results suggest that the density-functional approach is a satisfactory mean-field theory for estimating the equilibrium properties of liquid-solid interfaces, given knowledge of the uniform phases.
N-point correlation functions in the CfA and SSRS redshift distribution of galaxies
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique
1992-01-01
Using counts in cells, we estimate the volume-averaged N-point galaxy correlation functions for N = 2, 3, and 4, in redshift samples of the CfA and SSRS catalogs. Volume-limited samples of different sizes are used to study the uncertainties at different scales, the shot noise, and the problem with the boundaries. The hierarchical constants S3 and S4 agree well in all samples in CfA and SSRS, with averages S3 = 1.94 ± 0.07 and S4 = 4.56 ± 0.53. We compare these results with estimates obtained from angular catalogs and recent analyses of IRAS samples. The amplitudes S_J seem larger in real space than in redshift space, although the values from the angular analysis correspond to smaller scales, where we might expect larger nonperturbative effects. It is also found that S3 and S4 are smaller for IRAS than for optical galaxies. This, together with the fact that IRAS galaxies have a smaller amplitude for the above correlation functions, indicates that the density fluctuations of IRAS galaxies cannot be simply proportional to the density fluctuations of optical galaxies, i.e., biasing has to be nonlinear between them.
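A toy counts-in-cells sketch of the hierarchical amplitudes estimated above, S3 = ξ̄3/ξ̄2² and S4 = ξ̄4/ξ̄2³ computed from moments of the density contrast (Python; a lognormal field replaces real galaxy counts, and the shot-noise corrections needed for discrete counts are omitted):

```python
import numpy as np

rng = np.random.default_rng(8)
# toy density field: lognormal contrast sampled in N cells
N = 200_000
delta = np.exp(rng.normal(0, 0.5, N))
delta = delta / delta.mean() - 1           # zero-mean density contrast

# volume-averaged correlations from connected moments of counts in cells
xi2 = np.mean(delta**2)
xi3 = np.mean(delta**3)
xi4 = np.mean(delta**4) - 3 * xi2**2       # connected fourth moment

S3 = xi3 / xi2**2
S4 = xi4 / xi2**3
print(f"S3 = {S3:.2f}, S4 = {S4:.2f}")     # a lognormal field gives S3 = 3 + xi2
```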
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
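A small simulation of the mechanism analyzed above: agents random-walking on a torus grid estimate local density from their per-step encounter rates (Python; the grid size, agent count, and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)
G, A, T = 60, 360, 400             # grid side, number of agents, steps
density = A / G**2

pos = rng.integers(0, G, size=(A, 2))
encounters = np.zeros(A)
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])

for _ in range(T):
    pos = (pos + moves[rng.integers(0, 4, A)]) % G    # torus random walk
    # an encounter: another agent occupies the same cell this step
    cells = pos[:, 0] * G + pos[:, 1]
    _, inv, counts = np.unique(cells, return_inverse=True, return_counts=True)
    encounters += counts[inv] - 1

est = encounters.mean() / T        # per-step encounter rate tracks density
print(f"true density {density:.3f}, encounter-rate estimate {est:.3f}")
```

The interesting part of the paper is what this sketch glosses over: nearby walkers collide repeatedly, so the per-step encounters are not independent samples, yet the bound proved there nearly matches independent sampling.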
Relativistic Coulomb Excitation within the Time Dependent Superfluid Local Density Approximation
NASA Astrophysics Data System (ADS)
Stetcu, I.; Bertulani, C. A.; Bulgac, A.; Magierski, P.; Roche, K. J.
2015-01-01
Within the framework of the unrestricted time-dependent density functional theory, we present for the first time an analysis of the relativistic Coulomb excitation of the heavy deformed open shell nucleus 238U. The approach is based on the superfluid local density approximation formulated on a spatial lattice that can take into account coupling to the continuum, enabling self-consistent studies of superfluid dynamics of any nuclear shape. We compute the energy deposited in the target nucleus as a function of the impact parameter, finding it to be significantly larger than the estimate using the Goldhaber-Teller model. The isovector giant dipole resonance, the dipole pygmy resonance, and giant quadrupole modes are excited during the process. The one-body dissipation of collective dipole modes is shown to lead to a damping width Γ↓ ≈ 0.4 MeV, and the number of preequilibrium neutrons emitted has been quantified.
Estimation of the probability of success in petroleum exploration
Davis, J.C.
1977-01-01
A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.
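As a toy illustration of the first probabilistic component, empirical success rates conditional on a mapped geologic variable can be tabulated directly from a training area's drilling history. The data and the single closure indicator below are invented; the paper's full model also propagates mapping uncertainty.

```python
import numpy as np

def success_given_geology(closure, producer):
    """Empirical P(success | structural closure present / absent) from a
    training area's well outcomes (hedged illustration, not the full model)."""
    closure = np.asarray(closure, bool)
    producer = np.asarray(producer, bool)
    p_with = producer[closure].mean() if closure.any() else np.nan
    p_without = producer[~closure].mean() if (~closure).any() else np.nan
    return p_with, p_without

# toy history: 1 = closure mapped at the well site / well produced oil
closure  = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
producer = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
print(success_given_geology(closure, producer))   # (0.6, 0.2)
```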
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm, and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammer (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original and general distribution functions when only the variation in the length of the rotational semi-axis is considered.
Formalism for calculation of polymer-solvent-mediated potential
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2006-07-01
A simple theoretical approach is proposed for calculating the solvent-mediated potential (SMP) between two colloid particles immersed in a polymer solvent bath, in which the polymer is modeled as a chain with intramolecular degrees of freedom. The present recipe requires only an estimate of the density profile of a polymer site around a single solute colloid particle, instead of around two solute colloid particles separated by a varying distance as in existing calculational methods for the polymer-SMP. The present recipe is therefore far simpler to implement numerically than the existing methods. The resultant predictions for the polymer-SMP and the polymer solvent-mediated mean force (polymer-SMMF) are in very good agreement with available simulation data. With the present recipe, trends in the contact value and second virial coefficient of the SMP as a function of the size ratio between the colloid particle and the polymer site, the number of sites per chain, and the polymer concentration are investigated in detail. The metastable critical polymer concentration as a function of size ratio and number of sites per chain is also reported for the first time. To obtain the numerical solution of the present recipe in less than 1 min on a personal computer, a rapid and accurate algorithm for the numerical solution of the classical density functional theory is proposed to supply the density profile of the polymer site as an input to the present formalism.
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
The chemical reaction mechanism of NO addition to two β and δ isoprene hydroxy–peroxy radical isomers is examined in detail using density functional theory, coupled cluster methods, and the energy resolved master equation formalism to provide estimates of rate co...
Several studies have demonstrated association between gastrointestinal illness (GI) in swimmers and sewage pollution as measured by the density of indicator organisms, such as E. coli and enterococci, in recreational waters. These studies generally classify illnesses into two ca...
NASA Astrophysics Data System (ADS)
Tobochnik, Jan; Chapin, Phillip M.
1988-05-01
Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low-density fluid, the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions of 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series, we obtain an equation of state that fits all the data up to random closest packing. The radial distribution function usually showed the typical split second peak characteristic of amorphous solids and glasses. High-density systems that lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.
Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian
2015-06-01
We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule, where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured in the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM), in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact linearly, more efficient estimation methods can be used. Our contribution is a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In the latter step, the unknown function is estimated nonparametrically using a lasso-type procedure. A simulation study and an application to real data are presented. © 2015, The International Biometric Society.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively; whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
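A minimal sketch of the first step, fitting a Weibull probability density to a vertical canopy fuel-load profile, is shown below. The heights, loads, and the load-weighted resampling trick are illustrative assumptions, not the study's data or exact fitting procedure.

```python
import numpy as np
from scipy.stats import weibull_min

# made-up vertical profile: fuel load per height bin (kg m^-2)
heights = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0])
loads   = np.array([0.1, 0.4, 0.9, 1.3, 1.2, 0.8, 0.4, 0.1])

# resample heights proportionally to load to get a pseudo-sample for MLE
rng = np.random.default_rng(0)
sample = rng.choice(heights, size=5000, p=loads / loads.sum())

# two-parameter Weibull fit (location pinned at the ground, floc=0)
shape, loc, scale = weibull_min.fit(sample, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f} m")
```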
Characterizing Fishing Effort and Spatial Extent of Coastal Fisheries
Stewart, Kelly R.; Lewison, Rebecca L.; Dunn, Daniel C.; Bjorkland, Rhema H.; Kelez, Shaleyla; Halpin, Patrick N.; Crowder, Larry B.
2010-01-01
Biodiverse coastal zones are often areas of intense fishing pressure due to the high relative density of fishing capacity in these nearshore regions. Although overcapacity is one of the central challenges to fisheries sustainability in coastal zones, accurate estimates of fishing pressure in coastal zones are limited, hampering the assessment of the direct and collateral impacts (e.g., habitat degradation, bycatch) of fishing. We compiled a comprehensive database of fishing effort metrics and the corresponding spatial limits of fisheries and used a spatial analysis program (FEET) to map fishing effort density (measured as boat-meters per km2) in the coastal zones of six ocean regions. We also considered the utility of a number of socioeconomic variables as indicators of fishing pressure at the national level; fishing density increased as a function of population size and decreased as a function of coastline length. Our mapping exercise points to intra- and interregional 'hotspots' of coastal fishing pressure. The significant and intuitive relationships we found between fishing density and population size and coastline length may help with coarse regional characterizations of fishing pressure. However, spatially delimited fishing effort data are needed to accurately map fishing hotspots, i.e., areas of intense fishing activity. We suggest that estimates of fishing effort, not just target catch or yield, serve as a necessary measure of fishing activity, which is a key link to evaluating sustainability and environmental impacts of coastal fisheries. PMID:21206903
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre... The goal is to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope...
NASA Astrophysics Data System (ADS)
Hamed Mashhadzadeh, A.; Fereidoon, Ab.; Ghorbanzadeh Ahangari, M.
2017-10-01
In the current study, we combined theoretical and experimental approaches to evaluate the effect of functionalization and silanization on the mechanical behavior of polymer-based/CNT nanocomposites. Epoxy was selected as the thermoset polymer; polypropylene and polyvinyl chloride were selected as thermoplastic polymers. The procedure is divided into two sections. First, we applied density functional theory (DFT) to analyze the effect of functionalization on the equilibrium distance and adsorption energy of unmodified, -OH functionalized, and silanized epoxy/CNT, PP/CNT, and PVC/CNT nanocomposites; the results showed that functionalization increased the adsorption energy and reduced the equilibrium distance in all studied nanocomposites, and that silanization had a larger effect than -OH functionalization. We then prepared experimental samples of all these nanocomposites and tested their tensile and flexural strength properties. The results showed that functionalization improved the studied mechanical properties in all evaluated nanocomposites. Finally, we compared the experimental and theoretical results and found good agreement between them.
Notes on a New Coherence Estimator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickel, Douglas L.
This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to the knowledge of the noise value. ACKNOWLEDGEMENTS The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and the background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also vary concurrently. When PVC is applied post-reconstruction, the kinetic parameter estimates may be biased if the frame-dependent resolution is neglected. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using only the last-frame reconstructed image for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF-based OSEM produced the lowest-magnitude bias in kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last-frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
NASA Astrophysics Data System (ADS)
Chandler, Damon M.; Field, David J.
2007-04-01
Natural scenes, like most natural data sets, show considerable redundancy. Although many forms of redundancy have been investigated (e.g., pixel distributions, power spectra, contour relationships, etc.), estimates of the true entropy of natural scenes have largely been considered intractable. We describe a technique for estimating the entropy and relative dimensionality of image patches based on a function we call the proximity distribution (a nearest-neighbor technique). The advantage of this function over simple statistics such as the power spectrum is that the proximity distribution is dependent on all forms of redundancy. We demonstrate that this function can be used to estimate the entropy (redundancy) of 3×3 patches of known entropy as well as 8×8 patches of Gaussian white noise, natural scenes, and noise with the same power spectrum as natural scenes. The techniques are based on assumptions regarding the intrinsic dimensionality of the data, and although the estimates depend on an extrapolation model for images larger than 3×3, we argue that this approach provides the best current estimates of the entropy and compressibility of natural-scene patches and that it provides insights into the efficiency of any coding strategy that aims to reduce redundancy. We show that the sample of 8×8 patches of natural scenes used in this study has less than half the entropy of 8×8 white noise and less than 60% of the entropy of noise with the same power spectrum. In addition, given a finite number of samples (<2^20) drawn randomly from the space of 8×8 patches, the subspace of 8×8 natural-scene patches shows a dimensionality that depends on the sampling density and that for low densities is significantly lower dimensional than the space of 8×8 patches of white noise and noise with the same power spectrum.
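One standard nearest-neighbor entropy estimator in this family is the Kozachenko-Leonenko construction; the sketch below (not the authors' exact proximity-distribution estimator) illustrates how distances to nearest neighbors yield an entropy estimate.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def nn_entropy(x, k=1):
    """Kozachenko-Leonenko k-nearest-neighbor entropy estimate (nats) for
    n points in d dimensions; assumes continuous, duplicate-free data."""
    x = np.asarray(x, float)
    n, d = x.shape
    r = cKDTree(x).query(x, k=k + 1)[0][:, k]   # distance to k-th neighbor
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 2))
print(nn_entropy(x))   # true entropy of a 2-D unit Gaussian is ~2.84 nats
```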
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
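The core computational idea, sparsity of the variation map enforced with an L1 norm, can be prototyped in a few lines. The sketch below uses a synthetic 1-D source and a random stand-in for the EEG lead field, with cvxpy assumed available; the real SCCD algorithm operates on a cortical mesh.

```python
import numpy as np
import cvxpy as cp

# Toy SCCD-style recovery: a piecewise-constant source s from noisy
# measurements b = L s + n, penalizing the L1 norm of the first-difference
# (variation) map, which is sparse at the boundaries of active regions.
rng = np.random.default_rng(0)
n, m = 80, 40
s_true = np.zeros(n); s_true[25:45] = 1.0        # one extended source
L = rng.normal(size=(m, n)) / np.sqrt(m)         # stand-in lead field
b = L @ s_true + 0.01 * rng.normal(size=m)

D = np.diff(np.eye(n), axis=0)                   # first-difference operator
s = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(D @ s)),
                  [cp.norm(L @ s - b, 2) <= 0.1])
prob.solve()
print(np.round(s.value[20:50], 2))               # plateau over indices 25..44
```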
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Alasdair; Thomsen, Edwin; Reed, David
2016-04-20
A chemistry-agnostic cost performance model is described for a nonaqueous flow battery. The model predicts flow battery performance by estimating the active reaction zone thickness at each electrode as a function of current density, state of charge, and flow rate, using measured data for electrode kinetics, electrolyte conductivity, and electrode-specific surface area. Validation of the model is conducted using 4 kW stack data at various current densities and flow rates. This model is used to estimate the performance of a nonaqueous flow battery with electrode and electrolyte properties taken from the literature. The optimized cost for this system is estimated for various power and energy levels using component costs provided by vendors. The model allows optimization of design parameters such as electrode thickness, area, and flow path design, and operating parameters such as power density, flow rate, and operating SOC range for various application duty cycles. A parametric analysis is done to identify components and electrode/electrolyte properties with the highest impact on system cost for various application durations. A pathway to $100 kWh^-1 for the storage system is identified.
Geometrical Description in Binary Composites and Spectral Density Representation
Tuncer, Enis
2010-01-01
In this review, the dielectric permittivity of dielectric mixtures is discussed in view of the spectral density representation method. A distinct representation is derived for predicting the dielectric properties, permittivities ε, of mixtures. The presentation of the dielectric properties is based on a scaled permittivity approach, ξ = (ε_e − ε_m)(ε_i − ε_m)^(−1), where the subscripts e, m and i denote the dielectric permittivities of the effective, matrix and inclusion media, respectively [Tuncer, E. J. Phys.: Condens. Matter 2005, 17, L125]. This novel representation transforms the spectral density formalism to a form similar to the distribution-of-relaxation-times method of dielectric relaxation. Consequently, I propose that any dielectric relaxation formula, e.g., the Havriliak-Negami empirical dielectric relaxation expression, can be adopted as a scaled permittivity. The presented scaled permittivity representation has potential to be improved and implemented into existing data-analyzing routines for dielectric relaxation; however, the information to extract would be the topological/morphological description of the mixture. To arrive at this description, one needs to know the dielectric properties of the constituents and of the composite prior to the spectral analysis. To illustrate the strength of the representation and confirm the proposed hypothesis, the Landau-Lifshitz/Looyenga (LLL) [Looyenga, H. Physica 1965, 31, 401] expression is selected. The structural information of a mixture obeying LLL is extracted for different volume fractions of the phases. Both an in-house computational tool based on the Monte Carlo method to solve inverse integral transforms and the proposed empirical scaled permittivity expression are employed to estimate the spectral density function of the LLL expression. The estimated spectral functions for mixtures with different inclusion concentrations show similarities; they are composed of a couple of bell-shaped distributions, with coinciding peak locations but different heights. It is speculated that the coincidence of the peak locations is a clear illustration of the self-similar fractal nature of the mixture topology (structure) created with the LLL expression. Consequently, the spectra are not altered significantly with increased filler concentration; they exhibit a self-similar spectral density function at different concentration levels. Last but not least, the estimated percolation strengths also confirm the fractal nature of the systems characterized by the LLL mixture expression. It is concluded that the LLL expression is suitable for complex composite systems that have hierarchical order in their structure. These observations confirm the findings in the literature.
Electronegativity estimator built on QTAIM-based domains of the bond electron density.
Ferro-Costas, David; Pérez-Juste, Ignacio; Mosquera, Ricardo A
2014-05-15
The electron localization function, natural localized molecular orbitals, and the quantum theory of atoms in molecules have been used together to analyze the bond electron density (BED) distribution of different hydrogen-containing compounds through the definition of atomic contributions to the bonding regions. A function, g_AH, obtained from those contributions is analyzed along the second and third periods of the periodic table. It exhibits periodic trends typically assigned to electronegativity (χ), and it is also sensitive to hybridization variations. This function also shows an interesting S shape with different χ-scales, Allred-Rochow's being the one exhibiting the best monotonic increase with regard to the BED taken by each atom of the bond. We therefore think this χ can actually be related to the BED distribution. Copyright © 2014 Wiley Periodicals, Inc.
Demonstration of line transect methodologies to estimate urban gray squirrel density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hein, E.W.
1997-11-01
Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
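For reference, the core distance-sampling computation is compact. The sketch below fits a half-normal detection function to perpendicular sighting distances and converts it to a density; the distances are made up, and there is no truncation or variance estimation, unlike a full DISTANCE analysis.

```python
import numpy as np

def half_normal_density(perp_dists_m, total_line_km):
    """Minimal line-transect density estimate with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)); returns animals per hectare."""
    x = np.asarray(perp_dists_m, float)
    sigma2 = np.mean(x**2)                   # MLE of the half-normal scale
    esw = np.sqrt(np.pi * sigma2 / 2.0)      # effective strip half-width (m)
    L = total_line_km * 1000.0               # total transect length (m)
    density_per_m2 = len(x) / (2.0 * esw * L)
    return density_per_m2 * 1e4              # 1 ha = 10^4 m^2

dists = [2.0, 5.0, 1.0, 8.0, 3.0, 4.0, 6.0, 2.5]   # made-up sightings (m)
print(half_normal_density(dists, total_line_km=2.0))
```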
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator is based on the relationship between the probability distribution functions of the measured birefringence and the effective signal-to-noise ratio (ESNR), as well as the true birefringence and the true ESNR. The Monte Carlo method is used to describe this relationship numerically, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with a stochastic model of the ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also made. Numerical simulation validates the superiority of the new estimator, whose superior performance was also shown by in vivo measurement of the optic nerve head. PMID:28270974
First Principles Electronic Structure of Mn doped GaAs, GaP, and GaN Semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulthess, Thomas C; Temmerman, Walter M; Szotek, Zdzislawa
We present first-principles electronic structure calculations of Mn doped III-V semiconductors based on the local spin-density approximation (LSDA) as well as the self-interaction corrected local spin density method (SIC-LSD). We find that it is crucial to use a self-interaction free approach to properly describe the electronic ground state. The SIC-LSD calculations predict the proper electronic ground state configuration for Mn in GaAs, GaP, and GaN. Excellent quantitative agreement with experiment is found for the magnetic moment and p-d exchange in (GaMn)As. These results allow us to validate commonly used models for magnetic semiconductors. Furthermore, we discuss the delicate problem of extracting binding energies of localized levels from density functional theory calculations. We propose three approaches to take into account final state effects to estimate the binding energies of the Mn-d levels in GaAs. We find good agreement between computed values and estimates from photoemission experiments.
NASA Technical Reports Server (NTRS)
Rudy, Donald J.; Muhleman, Duane O.; Berge, Glenn L.; Jakosky, Bruce M.; Christensen, Philip R.
1987-01-01
Calculations based on 2- and 6-cm observations of Mars with the A configuration of the VLA have yielded a whole-disk effective dielectric constant of 2.34 ± 0.05, implying a subsurface density of 1.24 ± 0.11 g/cu cm at 2 cm, as well as 1.45 ± 0.10 g/cu cm effective density and 2.70 ± 0.10 dielectric constant at 6 cm. These parameters have also been estimated as a function of latitude over the 15 deg S to 60 deg N range; subsurface radio absorption length was estimated to be about 15 wavelengths at most of these latitudes. Most of the subsurface density calculations yielded results in the 1-2 g/cu cm range, implying that the subsurface is not very different from the surface observed by Viking and Mariner spacecraft; the decrease in correlation with depth is in keeping with slow variation of the subsurface in the near-subsurface region.
Estimating tropical-forest density profiles from multibaseline interferometric SAR
NASA Technical Reports Server (NTRS)
Treuhaft, Robert; Chapman, Bruce; dos Santos, Joao Roberto; Dutra, Luciano; Goncalves, Fabio; da Costa Freitas, Corina; Mura, Jose Claudio; de Alencastro Graca, Paulo Mauricio
2006-01-01
Vertical profiles of forest density are potentially robust indicators of forest biomass, fire susceptibility, and ecosystem function. Tropical forests, which are among the most dense and complicated targets for remote sensing, contain about 45% of the world's biomass. Remote sensing of tropical forest structure is therefore an important component of global biomass and carbon monitoring. This paper shows preliminary results of a multibaseline interferometric SAR (InSAR) experiment over primary, secondary, and selectively logged forests at La Selva Biological Station in Costa Rica. The profile shown results from inverse Fourier transforming 8 of the 18 baselines acquired. A profile is compared to lidar and field measurements. Results are highly preliminary and for qualitative assessment only. Parameter estimation will eventually replace Fourier inversion as the means of producing profiles.
Effects of LiDAR point density and landscape context on estimates of urban forest biomass
NASA Astrophysics Data System (ADS)
Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.
2015-03-01
Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
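A skeletal version of the statistical step, multiple linear regression of field-measured biomass on LiDAR-derived predictors, is sketched below. The metrics, coefficients, and sample size are synthetic stand-ins, and sklearn's score() gives plain R^2 rather than the adjusted R^2 reported in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic plot-level data: LiDAR height/cover metrics and field biomass
rng = np.random.default_rng(0)
n = 120
hmean = rng.uniform(5, 25, n)               # mean return height (m)
h95 = hmean + rng.uniform(2, 8, n)          # 95th percentile height (m)
cover = rng.uniform(0.3, 0.95, n)           # canopy cover fraction
biomass = 4.0 * hmean + 2.5 * h95 + 30 * cover + rng.normal(0, 8, n)

X = np.column_stack([hmean, h95, cover])
model = LinearRegression().fit(X, biomass)
print("R^2:", model.score(X, biomass))      # refit after thinning to compare
```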
Haskell, Craig A; Beauchamp, David A; Bollens, Stephen M
2017-01-01
Juvenile salmon (Oncorhynchus spp.) use of reservoir food webs is understudied. We examined the feeding behavior of subyearling Chinook salmon (O. tshawytscha) and its relation to growth by estimating the functional response of juvenile salmon to changes in the density of Daphnia, an important component of reservoir food webs. We then estimated salmon growth across a broad range of water temperatures and daily rations of two primary prey, Daphnia and juvenile American shad (Alosa sapidissima), using a bioenergetics model. Laboratory feeding experiments yielded a Type-II functional response curve, C = 29.858P/(4.271 + P), indicating that salmon consumption (C) of Daphnia was not affected until Daphnia densities (P) were < 30 L^-1. Past field studies documented Daphnia densities in lower Columbia River reservoirs of < 3 L^-1 in July but as high as 40 L^-1 in August. Bioenergetics modeling indicated that subyearlings could not achieve positive growth above 22°C regardless of prey type or consumption rate. When feeding on Daphnia, subyearlings could not achieve positive growth above 20°C (water temperatures they commonly encounter in the lower Columbia River during summer). At 16-18°C, subyearlings had to consume about 27,000 Daphnia day^-1 to achieve positive growth. However, when feeding on juvenile American shad, subyearlings had to consume 20 shad day^-1 at 16-18°C, or at least 25 shad day^-1 at 20°C, to achieve positive growth. Using empirical consumption rates and water temperatures from summer 2013, subyearlings exhibited negative growth during July (-0.23 to -0.29 g day^-1) and August (-0.05 to -0.07 g day^-1). By switching prey from Daphnia to juvenile shad, which have a higher energy density, subyearlings can partially compensate for the effects of the higher water temperatures they experience in the lower Columbia River during summer. However, achieving positive growth as piscivores requires subyearlings to feed at higher consumption rates than they exhibited empirically. While our results indicate compromised growth in reservoir habitats, the long-term repercussions for salmon populations in the Columbia River Basin are unknown.
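The fitted curve implies strong saturation; a quick evaluation, a sketch using only the constants quoted above, shows how consumption flattens as Daphnia density rises.

```python
# Type-II functional response fitted in the abstract:
# C = 29.858 * P / (4.271 + P), with P in Daphnia per liter.
def consumption(P):
    return 29.858 * P / (4.271 + P)

for P in [1, 3, 10, 30, 40]:
    print(f"P = {P:>2} Daphnia/L -> C = {consumption(P):.1f}")
# consumption approaches the asymptote 29.858, so feeding is only
# weakly limited by prey density once P exceeds roughly 30 per liter
```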
Domke, Grant M.; Woodall, Christopher W.; Walters, Brian F.; Smith, James E.
2013-01-01
The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.’s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events. PMID:23544112
Characterization of Cloud Water-Content Distribution
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2010-01-01
The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
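As an illustration of the MLE step, the sketch below fits a lognormal density to water-content samples. The lognormal form and the synthetic values are assumptions made for the example; the tool itself bins CloudSat retrievals by cloud phase, type, precipitation occurrence, and geolocation before fitting.

```python
import numpy as np
from scipy import stats

# `cwc` stands in for cloud water-content retrievals pooled over one
# sub-grid cell / cloud class (g m^-3, synthetic for this sketch)
rng = np.random.default_rng(0)
cwc = rng.lognormal(mean=-1.0, sigma=0.8, size=2000)

# maximum likelihood fit of a lognormal PDF (location pinned at zero)
shape, loc, scale = stats.lognorm.fit(cwc, floc=0)
print(f"sigma = {shape:.2f}, median = {scale:.2f} g m^-3")
```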
An Application of the H-Function to Curve-Fitting and Density Estimation.
1983-12-01
equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model, and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability
Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events
NASA Technical Reports Server (NTRS)
Simonsen, Lisa C.; Nealy, John E.
1992-01-01
The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.
Tarjan, Lily M; Tinker, M. Tim
2016-01-01
Parametric and nonparametric kernel methods dominate studies of animal home ranges and space use. Most existing methods are unable to incorporate information about the underlying physical environment, leading to poor performance in excluding areas that are not used. Using radio-telemetry data from sea otters, we developed and evaluated a new algorithm for estimating home ranges (hereafter Permissible Home Range Estimation, or “PHRE”) that reflects habitat suitability. We began by transforming sighting locations into relevant landscape features (for sea otters, coastal position and distance from shore). Then, we generated a bivariate kernel probability density function in landscape space and back-transformed this to geographic space in order to define a permissible home range. Compared to two commonly used home range estimation methods, kernel densities and local convex hulls, PHRE better excluded unused areas and required a smaller sample size. Our PHRE method is applicable to species whose ranges are restricted by complex physical boundaries or environmental gradients and will improve understanding of habitat-use requirements and, ultimately, aid in conservation efforts.
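The pipeline is easy to prototype: transform sightings into landscape coordinates, run a bivariate kernel density estimate there, and keep the high-density region that respects the habitat constraint. Everything below (the coordinates, the 95% mass rule, the constraint) is an invented stand-in; the real method uses the coastline geometry for the transforms.

```python
import numpy as np
from scipy.stats import gaussian_kde

# synthetic sightings in landscape space: along-coast position s (km)
# and distance offshore d (km, non-negative by construction)
rng = np.random.default_rng(0)
s = rng.normal(10.0, 2.0, 300)
d = np.abs(rng.normal(0.3, 0.2, 300))

kde = gaussian_kde(np.vstack([s, d]))                 # bivariate KDE
grid_s, grid_d = np.mgrid[0:20:200j, 0:1.5:100j]      # landscape-space grid
density = kde(np.vstack([grid_s.ravel(), grid_d.ravel()])).reshape(grid_s.shape)

# "permissible" range: highest-density region holding ~95% of the points,
# which by construction never extends into impermissible habitat (d < 0)
level = np.quantile(kde(np.vstack([s, d])), 0.05)
home_range_mask = density >= level
print(f"fraction of grid inside range: {home_range_mask.mean():.3f}")
```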
Polystyrene Foam EOS as a Function of Porosity and Fill Gas
NASA Astrophysics Data System (ADS)
Mulford, Roberta; Swift, Damian
2009-06-01
An accurate EOS for polystyrene foam is necessary for the analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary among foam samples according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, nitrogen-blown, and CO2-blown foams are investigated to estimate the importance of allowing air to react with plastic products during decomposition. Results differ somewhat from conventional EOS, which are generated from values for plastic extrapolated to low densities.
Morales, Miguel A; Pierleoni, Carlo; Schwegler, Eric; Ceperley, D M
2010-07-20
Using quantum simulation techniques based on either density functional theory or quantum Monte Carlo, we find clear evidence of a first-order transition in liquid hydrogen, between a low conductivity molecular state and a high conductivity atomic state. Using the temperature dependence of the discontinuity in the electronic conductivity, we estimate the critical point of the transition at temperatures near 2,000 K and pressures near 120 GPa. Furthermore, we have determined the melting curve of molecular hydrogen up to pressures of 200 GPa, finding a reentrant melting line. The melting line crosses the metalization line at 700 K and 220 GPa using density functional energetics and at 550 K and 290 GPa using quantum Monte Carlo energetics.
NASA Astrophysics Data System (ADS)
da Silva Filho, J. G.; Freire, V. N.; Caetano, E. W. S.; Ladeira, L. O.; Fulco, U. L.; Albuquerque, E. L.
2013-11-01
In this letter, we study the electronic structure and optical properties of the active medicinal component γ-aminobutyric acid (GABA) and its cocrystals with oxalic (OXA) and benzoic (BZA) acid by means of the density functional theory formalism. It is shown that the cocrystallization strongly weakens the zwitterionic character of the GABA molecule leading to striking differences among the electronic band structures and optical absorption spectra of the GABA crystal and GABA:OXA, GABA:BZA cocrystals, originating from distinct sets of hydrogen bonds. Calculated band widths and Δ-sol band gap estimates indicate that both GABA and GABA:OXA, GABA:BZA cocrystals are indirect gap insulators.
NASA Astrophysics Data System (ADS)
Alizadeh, M.; Schuh, H.; Schmidt, M. G.
2012-12-01
In recent decades, the Global Navigation Satellite System (GNSS) has become a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination, which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Because IGS stations are inhomogeneously distributed around the world, and the accuracy and reliability of the developed models are considerably lower in areas not well covered by IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonic expansion to the GNSS ionospheric observable. Within this study, this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere: the bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere by the newly proposed Vary-Chap function. (Figure captions: maximum electron density, IRI background model (elec/m3), day 202 of 2010, 0 UT; height of maximum electron density, IRI background model (km), day 202 of 2010, 0 UT.)
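For concreteness, the bottom-side representation mentioned above can be evaluated directly. The sketch below implements the standard alpha-Chapman layer; the parameter values (NmF2, hmF2, scale height H) are arbitrary illustrations, not values from this study.

```python
import numpy as np

def alpha_chapman(h, NmF2, hmF2, H):
    """alpha-Chapman electron density profile:
    Ne(h) = NmF2 * exp(0.5 * (1 - z - exp(-z))), z = (h - hmF2) / H."""
    z = (np.asarray(h, float) - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(100, 800, 8)                            # altitude (km)
print(alpha_chapman(h, NmF2=1e12, hmF2=350.0, H=60.0))  # electrons per m^3
```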
Precision Orbit Derived Atmospheric Density: Development and Performance
NASA Astrophysics Data System (ADS)
McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.
2012-09-01
Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
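The stitching step is straightforward to sketch: overlapping arc solutions are combined with linearly ramped weights so the merged series is continuous. Array lengths, values, and the overlap below are arbitrary; the actual solutions are overlapping 14-hour density arcs.

```python
import numpy as np

def blend(rho_a, rho_b, overlap):
    """Linearly weighted blend of two density solutions over their overlap:
    the weight ramps 1 -> 0 for the earlier arc and 0 -> 1 for the later
    arc, giving a continuous stitched series."""
    w = np.linspace(0.0, 1.0, overlap)
    blended = (1.0 - w) * rho_a[-overlap:] + w * rho_b[:overlap]
    return np.concatenate([rho_a[:-overlap], blended, rho_b[overlap:]])

a = np.full(100, 1.00)   # density from one arc (arbitrary units)
b = np.full(100, 1.10)   # density from the next, overlapping arc
print(blend(a, b, overlap=20)[75:100])   # smooth ramp across the seam
```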
Estimation of Damage Costs Associated with Flood Events
NASA Astrophysics Data System (ADS)
Andrews, T. A.; Wauthier, C.; Zipp, K.
2017-12-01
This study investigates the possibility of creating a mathematical function that enables the estimation of flood-damage costs. We begin by examining the costs associated with past flood events in the United States. The data on these tropical storms and hurricanes are provided by the National Oceanic and Atmospheric Administration. With the location, extent of flooding, and damage reparation costs identified, we analyze variables such as: number of inches rained, land elevation, type of landscape, regional development with regard to building density and infrastructure, and population concentration. We seek to identify the leading drivers of high flood-damage costs and understand which variables play a large role in the costliness of these weather events. Upon completion of our mathematical analysis, we turn our attention to the 2017 natural disaster in Texas. We divide the region, as above, by land elevation, type of landscape, regional development with regard to building density and infrastructure, and population concentration. Then, we overlay the number of inches rained in those regions onto the divided landscape and apply our function. We hope to use these findings to estimate the potential flood-damage costs of Hurricane Harvey. This information is then transformed into a hazard map that could provide citizens and businesses of flood-stricken zones additional resources for their insurance selection process.
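The abstract does not give the functional form, but a minimal least-squares sketch of the kind of damage-cost function described, fitted on synthetic stand-in data, might look like the following; all variable names, coefficients, and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # synthetic past flood events
rain = rng.uniform(1, 40, n)              # inches rained
elev = rng.uniform(0, 300, n)             # mean land elevation (m)
bldg = rng.uniform(10, 5000, n)           # building density (per km^2)
pop = rng.uniform(10, 10000, n)           # population (per km^2)
# synthetic log10 damage costs, used only to exercise the fit
log_cost = (2.0 + 0.08 * rain - 0.004 * elev
            + 2e-4 * bldg + 5e-5 * pop + rng.normal(0, 0.3, n))

# ordinary least squares for the damage-cost function coefficients
A = np.column_stack([np.ones(n), rain, elev, bldg, pop])
coef, *_ = np.linalg.lstsq(A, log_cost, rcond=None)

# apply the fitted function to a hypothetical Harvey-affected region
x_new = np.array([1.0, 30.0, 10.0, 3000.0, 4000.0])
print(f"estimated damage: ${10 ** (x_new @ coef):,.0f}")
```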
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing only up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
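A minimal sketch of the PCA-versus-ICA comparison on a particle ensemble, using scikit-learn's PCA and FastICA; the ensemble here is a synthetic skewed 6-dimensional cloud standing in for particle-filter samples, not the orbit scenarios of the paper.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
# stand-in for a particle-filter ensemble: 5000 particles of a
# 6-dimensional state with one skewed (non-Gaussian) direction
n, d = 5000, 6
particles = rng.normal(size=(n, d))
particles[:, 0] += 0.5 * particles[:, 1] ** 2   # banana-shaped marginal

for name, model in [("PCA", PCA(n_components=3)),
                    ("ICA", FastICA(n_components=3, random_state=0))]:
    z = model.fit_transform(particles)          # compressed representation
    recon = model.inverse_transform(z)          # back to state space
    err = np.mean((particles - recon) ** 2)
    # third central moment of the reconstruction: a crude check of how
    # much higher-order (non-Gaussian) structure survives compression
    skew = np.mean((recon - recon.mean(0)) ** 3, axis=0)
    print(f"{name}: reconstruction MSE = {err:.3f}, "
          f"third moments = {np.round(skew, 2)}")
```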
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration to exceed regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated.
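One standard route to treating a discrete probability vector as a compositional variable, not necessarily the authors' exact construction, is the centered log-ratio (clr) transform: kriging is performed on the unconstrained clr coordinates, and the back-transform returns valid probabilities by design. A minimal sketch:

```python
import numpy as np

def clr(p, eps=1e-9):
    """Centered log-ratio transform of a discrete probability vector,
    mapping the simplex to unconstrained real coordinates."""
    lp = np.log(p + eps)
    return lp - lp.mean()

def clr_inv(z):
    """Inverse clr: exponentiate and renormalize back to the simplex."""
    w = np.exp(z)
    return w / w.sum()

# a site's discrete PDF over nitrate-concentration classes (assumed values)
p = np.array([0.6, 0.25, 0.1, 0.05])
z = clr(p)            # cokrige these coordinates instead of p itself
print(clr_inv(z))     # estimates return to (0,1) and sum to 1 by design
```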
Tracing the Magnetic Field of IRDC G028.23-00.19 Using NIR Polarimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoq, Sadia; Clemens, D. P.; Cashman, Lauren R.
2017-02-20
The importance of the magnetic (B) field in the formation of infrared dark clouds (IRDCs) and massive stars is an ongoing topic of investigation. We studied the plane-of-sky B field for one IRDC, G028.23-00.19, to understand the interaction between the field and the cloud. We used near-IR background starlight polarimetry to probe the B field and performed several observational tests to assess the field importance. The polarimetric data, taken with the Mimir instrument, consisted of H-band and K-band observations, totaling 17,160 stellar measurements. We traced the plane-of-sky B-field morphology with respect to the sky-projected cloud elongation. We also found the relationship between the estimated B-field strength and gas volume density, and we computed estimates of the normalized mass-to-magnetic flux ratio. The B-field orientation with respect to the cloud did not show a preferred alignment, but it did exhibit a large-scale pattern. The plane-of-sky B-field strengths ranged from 10 to 165 μG, and the B-field strength dependence on density followed a power law with an index consistent with 2/3. The mass-to-magnetic flux ratio also increased as a function of density. The relative orientations and relationship between the B field and density imply that the B field was not dynamically important in the formation of the IRDC. The increase in mass-to-flux ratio as a function of density, though, indicates a dynamically important B field. Therefore, it is unclear whether the B field influenced the formation of G28.23. However, it is likely that the presence of the IRDC changed the local B-field morphology.
The Next Generation of Mars-GRAM and Its Role in the Autonomous Aerobraking Development Plan
NASA Technical Reports Server (NTRS)
Justh, Hilary L.; Justus, Carl G.; Ramey, Holly S.
2011-01-01
The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM 2010 is currently being used to develop the onboard atmospheric density estimator that is part of the Autonomous Aerobraking Development Plan. In previous versions, Mars-GRAM was less than realistic when used for sensitivity studies for Thermal Emission Spectrometer (TES) MapYear=0 and large optical depth values, such as tau=3. A comparison analysis has been completed between Mars-GRAM, TES, and data from the Planetary Data System (PDS), resulting in updated coefficients for the functions relating density to latitude and longitude of the sun. The adjustment factors are expressed as a function of height (z), latitude (Lat), and areocentric solar longitude (Ls). The latest release of Mars-GRAM 2010 includes these adjustment factors, which alter the input data from MGCM and MTGCM for the Mapping Year 0 (user-controlled dust) case. The greatest adjustment occurs at large optical depths such as tau greater than 1. The addition of the adjustment factors has led to better correspondence to TES limb data from 0-60 km as well as better agreement with MGS, ODY, and MRO data at approximately 90-135 km. Improved simulations utilizing Mars-GRAM 2010 are vital to developing the onboard atmospheric density estimator for the Autonomous Aerobraking Development Plan. Mars-GRAM 2010 was not the only planetary GRAM utilized during phase 1 of this plan; Titan-GRAM and Venus-GRAM were used to generate density data sets for Aerobraking Design Reference Missions. These data sets included altitude profiles (both vertical and along a trajectory) and GRAM perturbations (tides, gravity waves, etc.), and provided density and scale height values for analysis by other Autonomous Aerobraking team members.
Lindsay, A E; Spoonmore, R T; Tzou, J C
2016-10-01
A hybrid asymptotic-numerical method is presented for obtaining an asymptotic estimate for the full probability distribution of capture times of a random walker by multiple small traps located inside a bounded two-dimensional domain with a reflecting boundary. As motivation for this study, we calculate the variance in the capture time of a random walker by a single interior trap and determine this quantity to be comparable in magnitude to the mean. This implies that the mean is not necessarily reflective of typical capture times and that the full density must be determined. To solve the underlying diffusion equation, the method of Laplace transforms is used to obtain an elliptic problem of modified Helmholtz type. In the limit of vanishing trap sizes, each trap is represented as a Dirac point source, which permits the solution of the transform equation to be represented as a superposition of Helmholtz Green's functions. Using this solution, we construct asymptotic short-time solutions of the first-passage-time density, which capture the peaks associated with rapid capture by the absorbing traps. When numerical evaluation of the Helmholtz Green's function is employed, followed by numerical inversion of the Laplace transform, the method reproduces the density for larger times. We demonstrate the accuracy of our solution technique with a comparison to statistics obtained from a time-dependent solution of the diffusion equation and discrete particle simulations. In particular, we demonstrate that the method is capable of capturing the multimodal behavior in the capture-time density that arises when the traps are strategically arranged. The hybrid method presented can be applied to scenarios involving both arbitrary domains and trap shapes.
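For comparison with such asymptotics, the discrete particle simulations mentioned above can be sketched directly: Brownian walkers in a reflecting unit square with one absorbing circular trap, with the histogram of capture times approximating the first-passage-time density. The domain, trap size, and step parameters below are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
D, dt, t_max = 1.0, 2e-4, 2.0            # diffusivity, time step, horizon
trap_c, trap_r = np.array([0.5, 0.5]), 0.05
n = 5000
pos = rng.uniform(0, 1, size=(n, 2))     # walkers in the unit square
capture = np.full(n, np.nan)             # capture time of each walker

sigma = np.sqrt(2 * D * dt)              # Brownian step size
t = 0.0
alive = np.ones(n, dtype=bool)
while t < t_max and alive.any():
    pos[alive] += rng.normal(0, sigma, size=(alive.sum(), 2))
    # reflecting boundary: fold positions back into [0, 1]^2
    pos[alive] = np.abs(pos[alive])
    pos[alive] = 1 - np.abs(1 - pos[alive])
    t += dt
    idx = np.flatnonzero(alive)
    hit = np.linalg.norm(pos[idx] - trap_c, axis=1) < trap_r
    capture[idx[hit]] = t
    alive[idx[hit]] = False

# normalized histogram approximates the first-passage-time density
times = capture[~np.isnan(capture)]
density, edges = np.histogram(times, bins=80, density=True)
print(f"captured {times.size}/{n}, mean capture time ~ {times.mean():.3f}")
```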
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
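A minimal sketch of the linear-basis KLIEP that GM-KLIEP extends: fixed-width Gaussian basis functions (here centered at numerator samples), projected gradient ascent on the mean log-importance, and renormalization to enforce the denominator constraint. GM-KLIEP itself would additionally learn means and covariances via EM; the width, learning rate, and data below are assumptions.

```python
import numpy as np

def kliep(x_nu, x_de, centers, width, iters=2000, lr=1e-3):
    """Fixed-width Gaussian-basis KLIEP: maximize the mean log importance
    over numerator samples subject to the constraint that the importance
    averages to one over denominator samples."""
    phi = lambda x: np.exp(-((x[:, None, :] - centers[None]) ** 2)
                           .sum(-1) / (2 * width ** 2))
    A, B = phi(x_nu), phi(x_de)          # basis on both sample sets
    alpha = np.ones(len(centers)) / len(centers)
    for _ in range(iters):
        w_nu = A @ alpha
        alpha += lr * (A / w_nu[:, None]).mean(0)  # gradient of mean log w
        alpha = np.maximum(alpha, 0)               # non-negativity
        alpha /= (B @ alpha).mean()                # mean_de w = 1
    return lambda x: phi(x) @ alpha

rng = np.random.default_rng(3)
x_nu = rng.normal(0.0, 1.0, (500, 1))   # sample from numerator density p
x_de = rng.normal(0.5, 1.5, (500, 1))   # sample from denominator density q
w = kliep(x_nu, x_de, centers=x_nu[:50], width=0.5)
print(w(np.zeros((1, 1))))              # importance estimate at x = 0
```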
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that, in the context of orbital elements, the EPC and SB populations are indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and a standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
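A minimal sketch of the empirical-CDF construction and a two-sample test of indistinguishability of the kind this comparison implies; the eccentricity samples are synthetic stand-ins, and the paper's actual test statistic is not specified here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# stand-ins for the two populations' orbital eccentricities
ecc_epc = rng.beta(2, 4, 80)    # hypothetical planetary candidates
ecc_sb = rng.beta(2, 4, 120)    # hypothetical spectroscopic binaries

# empirical CDF: sorted values vs. cumulative fraction
x = np.sort(ecc_epc)
F = np.arange(1, len(x) + 1) / len(x)
print("EPC median eccentricity:", x[np.searchsorted(F, 0.5)])

# two-sample Kolmogorov-Smirnov test of indistinguishability
stat, p = stats.ks_2samp(ecc_epc, ecc_sb)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
```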
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salama, A.; Mikhail, M.
Comprehensive software packages have been developed at the Western Research Centre as tools to help coal preparation engineers analyze, evaluate, and control coal cleaning processes. The COal Preparation Software package (COPS) performs three functions: (1) data handling and manipulation; (2) data analysis, including the generation of washability data, performance evaluation and prediction, density and size modeling, and evaluation of density and size partition characteristics and attrition curves; and (3) generation of graphics output. The Separation ChARacteristics Estimation (SCARE) software packages are developed to balance raw density or size separation data; the cases of density and size separation data are both considered. The generated balanced data can take the balanced or normalized forms. The scaled form is desirable for direct determination of the partition functions (curves). The raw and generated separation data are displayed in tabular and/or graphical forms. The computer software described in this paper provides valuable tools for coal preparation plant engineers and operators for evaluating process performance, adjusting plant parameters, and balancing raw density or size separation data. These packages have been applied very successfully in many projects carried out by WRC for the Canadian coal preparation industry. The software packages are designed to run on a personal computer (PC).
Superfluidity in Strongly Interacting Fermi Systems with Applications to Neutron Stars
NASA Astrophysics Data System (ADS)
Khodel, Vladimir
The rotational dynamics and cooling history of neutron stars are influenced by the superfluid properties of nucleonic matter. In this thesis a novel separation technique is applied to the analysis of the gap equation for neutron matter. It is shown that the problem can be recast into two tasks: solving a simple system of linear integral equations for the shape functions of various components of the gap function, and solving a system of non-linear algebraic equations for their scale factors. Important simplifications result from the fact that the ratio of the gap amplitude to the Fermi energy provides a small parameter in this problem. The relationship between the analytic structure of the shape functions and the density interval for the existence of a superfluid gap is discussed. It is shown that in the 1S0 channel the position of the first zero of the shape function gives an estimate of the upper critical density. The relation between the resonant behavior of the two-neutron interaction in this channel and the density dependence of the gap is established. The behavior of the gap in the limits of low and high densities is analyzed. Various approaches to the calculation of the scale factors are considered: model cases, angular averaging, and perturbation theory. An optimization-based approach is proposed. The shape functions and scale factors for the Argonne v14 and v18 potentials are determined in the singlet and triplet channels. Dependence of the solution on the value of the effective mass and medium polarization is studied.
Gibbs measures based on 1d (an)harmonic oscillators as mean-field limits
NASA Astrophysics Data System (ADS)
Lewin, Mathieu; Nam, Phan Thành; Rougerie, Nicolas
2018-04-01
We prove that Gibbs measures based on 1D defocusing nonlinear Schrödinger functionals with sub-harmonic trapping can be obtained as the mean-field/large temperature limit of the corresponding grand-canonical ensemble for many bosons. The limit measure is supported on Sobolev spaces of negative regularity, and the corresponding density matrices are not trace-class. The general proof strategy is that of a previous paper of ours, but we have to complement it with Hilbert-Schmidt estimates on reduced density matrices.
NASA Astrophysics Data System (ADS)
Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel
2017-05-01
The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also strongly converge to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.
Fransson, Thomas; Saue, Trond; Norman, Patrick
2016-05-10
The influences of group 12 (Zn, Cd, Hg) metal substitution on the valence spectra and phosphorescence parameters of porphyrins (P) have been investigated in a relativistic setting. In order to obtain valence spectra, this study reports the first application of the damped linear response function, or complex polarization propagator, in the four-component density functional theory framework [as formulated in Villaume et al., J. Chem. Phys. 2010, 133, 064105]. It is shown that the steep increase in the density of states due to the inclusion of spin-orbit coupling yields only minor changes in the overall computational costs involved with the solution of the set of linear response equations. Comparing single-frequency to multifrequency spectral calculations, it is noted that the number of iterations in the iterative linear equation solver per frequency grid-point decreases monotonically from 30 to 0.74 as the number of frequency points goes from one to 19. The main heavy-atom effect on the UV/vis absorption spectra is indirect and attributed to the change of point group symmetry due to metal substitution, and it is noted that substitutions using heavier atoms yield small red-shifts of the intense Soret band. Concerning phosphorescence parameters, the adoption of a four-component relativistic setting enables the calculation of such properties at linear order of response theory, so higher-order response functions do not need to be considered; a real, conventional form of linear response theory has been used for the calculation of these parameters. For the substituted porphyrins, electronic coupling between the lowest triplet states is strong and results in theoretical estimates of lifetimes that are sensitive to the wave function and electron density parametrization. With this in mind, we report our best estimates of the phosphorescence lifetimes to be 460, 13.8, 11.2, and 0.00155 s for H2P, ZnP, CdP, and HgP, respectively, with the corresponding transition energies being equal to 1.46, 1.50, 1.38, and 0.89 eV.
NASA Technical Reports Server (NTRS)
Sittler, Edward C., Jr.; Guhathakurta, Madhulika
1999-01-01
We have developed a two-dimensional semiempirical MHD model of the solar corona and solar wind. The model uses empirically derived electron density profiles from white-light coronagraph data measured during the Skylab period and an empirically derived model of the magnetic field which is fitted to observed streamer topologies, which also come from the white-light coronagraph data. The electron density model comes from that developed by Guhathakurta and coworkers. The electron density model is extended into interplanetary space by using electron densities derived from the Ulysses plasma instrument. The model also requires an estimate of the solar wind velocity as a function of heliographic latitude and of the radial component of the magnetic field at 1 AU, both of which can be provided by the Ulysses spacecraft. The model makes estimates, as a function of radial distance and latitude, of various fluid parameters of the plasma such as flow velocity V, effective temperature T(sub eff), and effective heat flux q(sub eff), which are derived from the equations of conservation of mass, momentum, and energy, respectively. The term effective indicates that wave contributions could be present. The model naturally provides the spiral pattern of the magnetic field far from the Sun and an estimate of the large-scale surface magnetic field at the Sun, which we estimate to be approx. 12-15 G. The magnetic field model shows that the large-scale surface magnetic field is dominated by an octupole term. The model is a steady state calculation which makes the assumption of azimuthal symmetry and solves the various conservation equations in the rotating frame of the Sun. The conservation equations are integrated along the magnetic field direction in the rotating frame of the Sun, thus providing a nearly self-consistent calculation of the fluid parameters. The model makes a minimum number of assumptions about the physics of the solar corona and solar wind and should provide a very accurate empirical description of both. Once estimates of mass density rho, flow velocity V, effective temperature T(sub eff), effective heat flux q(sub eff), and magnetic field B are computed from the model and waves are assumed unimportant, all other plasma parameters such as Mach number, Alfven speed, gyrofrequency, etc. can be derived as a function of radial distance and latitude from the Sun. The model can be used as a planning tool for missions such as Solar Probe and provide an empirical framework for theoretical models of the solar corona and solar wind. The model will be used to construct a semiempirical MHD description of the steady state solar corona and solar wind using the SOHO Large Angle Spectrometric Coronagraph (LASCO) polarized-brightness white-light coronagraph data, SOHO Extreme Ultraviolet Imaging Telescope data, and Ulysses plasma data.
Extreme Mean and Its Applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.
1979-01-01
Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of a p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
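Under one reading of the definition, the extreme mean of the upper-p tail of a normal distribution follows from the inverse Mills ratio; a minimal sketch (the paper's exact truncation convention and estimator are not reproduced here):

```python
import numpy as np
from scipy import stats

def extreme_mean(p, mu=0.0, sigma=1.0):
    """Mean of the upper-p tail of a normal distribution, one reading
    of the 'p-th probability truncated normal' definition:
    E[X | X > z] = mu + sigma * pdf(z) / p with z = Phi^{-1}(1 - p)."""
    z = stats.norm.ppf(1 - p)                  # truncation point
    return mu + sigma * stats.norm.pdf(z) / p  # inverse Mills ratio

print(extreme_mean(0.05))   # ~2.063 for a standard normal
```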
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
NASA Astrophysics Data System (ADS)
Rogers, Keir K.; Bird, Simeon; Peiris, Hiranya V.; Pontzen, Andrew; Font-Ribera, Andreu; Leistedt, Boris
2018-03-01
We measure the effect of high column density absorbing systems of neutral hydrogen (H I) on the one-dimensional (1D) Lyman α forest flux power spectrum using cosmological hydrodynamical simulations from the Illustris project. High column density absorbers (which we define to be those with H I column densities N(H I) > 1.6 × 10^{17} atoms cm^{-2}) cause broadened absorption lines with characteristic damping wings. These damping wings bias the 1D Lyman α forest flux power spectrum by causing absorption in quasar spectra away from the location of the absorber itself. We investigate the effect of high column density absorbers on the Lyman α forest using hydrodynamical simulations for the first time. We provide templates as a function of column density and redshift, allowing the flexibility to accurately model residual contamination, i.e. if an analysis selectively clips out the largest damping wings. This flexibility will improve cosmological parameter estimation, for example, allowing more accurate measurement of the shape of the power spectrum, with implications for cosmological models containing massive neutrinos or a running of the spectral index. We provide fitting functions to reproduce these results so that they can be incorporated straightforwardly into a data analysis pipeline.
Mechanisms of jamming in the Nagel-Schreckenberg model for traffic flow.
Bette, Henrik M; Habel, Lars; Emig, Thorsten; Schreckenberg, Michael
2017-01-01
We study the Nagel-Schreckenberg cellular automata model for traffic flow by both simulations and analytical techniques. To better understand the nature of the jamming transition, we analyze the fraction of stopped cars P(v=0) as a function of the mean car density. We present a simple argument that yields an estimate for the free density where jamming occurs, and show satisfying agreement with simulation results. We demonstrate that the fraction of jammed cars P(v∈{0,1}) can be decomposed into the three factors (jamming rate, jam lifetime, and jam size) for which we derive, from random walk arguments, exponents that control their scaling close to the critical density.
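A minimal simulation sketch of the model and the P(v=0) diagnostic: the standard four Nagel-Schreckenberg update rules on a circular road, with the stopped fraction averaged after a warm-up. Lattice size, braking probability, and run lengths below are assumptions, not the paper's settings.

```python
import numpy as np

def nasch_stopped_fraction(density, L=1000, v_max=5, p=0.3,
                           steps=2000, warmup=1000, seed=0):
    """Nagel-Schreckenberg CA: fraction of stopped cars P(v=0)
    at a given mean density on a circular road of L cells."""
    rng = np.random.default_rng(seed)
    n = int(density * L)
    x = np.sort(rng.choice(L, n, replace=False))   # car positions
    v = np.zeros(n, dtype=int)
    stopped = []
    for t in range(steps):
        gap = (np.roll(x, -1) - x - 1) % L         # empty cells ahead
        v = np.minimum(v + 1, v_max)               # 1. accelerate
        v = np.minimum(v, gap)                     # 2. avoid collisions
        brake = rng.random(n) < p
        v = np.maximum(v - brake, 0)               # 3. random slowdown
        x = (x + v) % L                            # 4. move
        if t >= warmup:
            stopped.append(np.mean(v == 0))
    return np.mean(stopped)

for rho in [0.05, 0.10, 0.15, 0.20]:
    print(rho, nasch_stopped_fraction(rho))
```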
Uncertainty Quantification using Epi-Splines and Soft Information
2012-06-01
... use of the Kullback-Leibler divergence measure. The Kullback-Leibler ... to illustrate the application of soft information related to the Kullback-Leibler (KL) divergence discussed in Chapter 2. The idea behind applying ... information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of ...
Star formation in the multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael; Leichenauer, Stefan
2009-03-15
We develop a simple semianalytic model of the star formation rate as a function of time. We estimate the star formation rate for a wide range of values of the cosmological constant, spatial curvature, and primordial density contrast. Our model can predict such parameters in the multiverse, if the underlying theory landscape and the cosmological measure are known.
Boundary Kernel Estimation of the Two Sample Comparison Density Function
1989-05-01
A generalized system of models forecasting Central States tree growth.
Stephen R. Shifley
1987-01-01
Describes the development and testing of a system of individual tree-based growth projection models applicable to species in Indiana, Missouri, and Ohio. Annual tree basal area growth is estimated as a function of tree size, crown ratio, stand density, and site index. Models are compatible with the STEMS and TWIGS Projection System.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
The aim is to provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte-Carlo techniques; this new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Additionally, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented; with the aid of this model and the use of Monte-Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are obtained. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry.
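A minimal sketch of the Monte-Carlo uncertainty technique on a single-channel calibration: sample the inputs, push each draw through the calibration curve, and read summary statistics off the resulting numerical dose PDF. The calibration form and all numbers are hypothetical, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)
M = 100_000                      # Monte-Carlo trials

# hypothetical single-channel calibration D = a*netOD + b*netOD**2,
# with parameter uncertainties from the calibration fit (assumed values)
a, b = 10.0, 35.0
sig_a, sig_b = 0.3, 1.5
netOD, sig_netOD = 0.25, 0.004   # measured net optical density and noise

# sample inputs, push each draw through the calibration curve
a_s = rng.normal(a, sig_a, M)
b_s = rng.normal(b, sig_b, M)
od_s = rng.normal(netOD, sig_netOD, M)
dose = a_s * od_s + b_s * od_s ** 2

# numerical representation of the dose PDF -> standard deviation and bias
d0 = a * netOD + b * netOD ** 2
print(f"dose = {dose.mean():.3f} Gy, std = {dose.std():.3f} Gy, "
      f"bias = {dose.mean() - d0:.4f} Gy")
```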
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol^-1.
Self-diffusion in MgO--a density functional study.
Runevall, Odd; Sandberg, Nils
2011-08-31
Density functional theory calculations have been performed to study self-diffusion in magnesium oxide, a model material for a wide range of ionic compounds. Formation energies and entropies of Schottky defects and divacancies were obtained by means of total energy and phonon calculations in supercell configurations. Transition state theory was used to estimate defect migration rates, with migration energies taken from static calculations, and the corresponding frequency factors estimated from the phonon spectrum. In all static calculations we corrected for image effects using either a multipole expansion or an extrapolation to the low concentration limit. It is shown that both methods give similar results. The results for self-diffusion of Mg and O confirm the previously established picture, namely that in materials of nominal purity, Mg diffuses extrinsically by a single vacancy mechanism, while O diffuses intrinsically by a divacancy mechanism. Quantitatively, the current results are in very good agreement with experiments concerning O diffusion, while for Mg the absolute diffusion rate is generally underestimated by a factor of 5-10. The reason for this discrepancy is discussed.
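The transition-state-theory ingredient can be sketched as an Arrhenius hop-rate estimate of a vacancy diffusivity, using the standard form D = f a^2 nu exp(-Em / kB T); every numerical input below (migration energy, attempt frequency, lattice spacing, correlation factor) is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

kB = 8.617333e-5                 # Boltzmann constant, eV/K

def vacancy_diffusivity(T, E_m, nu, a, f=0.78):
    """Transition-state-theory estimate of a vacancy diffusivity:
    D = f * a^2 * nu * exp(-E_m / kB T). The correlation factor f=0.78
    is the fcc value, assumed here for the MgO cation sublattice."""
    return f * a ** 2 * nu * np.exp(-E_m / (kB * T))

# illustrative numbers only (not the paper's calculated values)
for T in (1000, 1500, 2000):
    D = vacancy_diffusivity(T, E_m=2.0, nu=1e13, a=2.98e-10)
    print(f"T = {T} K: D ~ {D:.3e} m^2/s")
```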
Kim, Hyoungkyu; Hudetz, Anthony G.; Lee, Joseph; Mashour, George A.; Lee, UnCheol; Avidan, Michael S.
2018-01-01
The integrated information theory (IIT) proposes a quantitative measure, denoted as Φ, of the amount of integrated information in a physical system, which is postulated to have an identity relationship with consciousness. IIT predicts that the value of Φ estimated from brain activities represents the level of consciousness across phylogeny and functional states. Practical limitations, such as the explosive computational demands required to estimate Φ for real systems, have hindered its application to the brain and raised questions about the utility of IIT in general. To achieve practical relevance for studying the human brain, it will be beneficial to establish the reliable estimation of Φ from multichannel electroencephalogram (EEG) and define the relationship of Φ to EEG properties conventionally used to define states of consciousness. In this study, we introduce a practical method to estimate Φ from high-density (128-channel) EEG and determine the contribution of each channel to Φ. We examine the correlation of power, frequency, functional connectivity, and modularity of EEG with regional Φ in various states of consciousness as modulated by diverse anesthetics. We find that our approximation of Φ alone is insufficient to discriminate certain states of anesthesia. However, a multi-dimensional parameter space extended by four parameters related to Φ and EEG connectivity is able to differentiate all states of consciousness. The association of Φ with EEG connectivity during clinically defined anesthetic states represents a new practical approach to the application of IIT, which may be used to characterize various physiological (sleep), pharmacological (anesthesia), and pathological (coma) states of consciousness in the human brain.
Atmospheric densities derived from CHAMP/STAR accelerometer observations
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Tamagnan, D.; Biancale, R.
2004-03-01
The satellite CHAMP carries the accelerometer STAR in its payload, and thanks to the GPS and SLR tracking systems accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%. The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models represent the latitudinal gradient inaccurately. Using DTM-2000 as a priori model, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, and the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
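The core retrieval step, inverting the drag law for density once a calibrated along-track acceleration is known, can be sketched as follows; the satellite mass, drag-coefficient-times-area, and acceleration values are illustrative assumptions, not CHAMP's actual parameters.

```python
import numpy as np

def density_from_drag(a_drag, v_rel, m, cd_A):
    """Invert the drag law a = -(1/2) * (Cd*A/m) * rho * v^2 to get
    total mass density from a calibrated tangential acceleration."""
    return 2.0 * m * np.abs(a_drag) / (cd_A * v_rel ** 2)

# illustrative CHAMP-like numbers (assumed, not mission values)
rho = density_from_drag(a_drag=-2.0e-7,   # m/s^2, tangential acceleration
                        v_rel=7600.0,     # m/s, speed relative to atmosphere
                        m=500.0,          # kg, spacecraft mass
                        cd_A=2.2 * 1.0)   # Cd times cross-section, m^2
print(f"rho ~ {rho:.3e} kg/m^3")
```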
NASA Technical Reports Server (NTRS)
Veselovskii, I.; Whiteman, D. N.; Korenskiy, M.; Kolgotin, A.; Dubovik, O.; Perez-Ramirez, D.; Suvorina, A.
2013-01-01
The results of the application of the linear estimation technique to multiwavelength Raman lidar measurements performed during the summer of 2011 in Greenbelt, MD, USA, are presented. We demonstrate that multiwavelength lidars are capable not only of providing vertical profiles of particle properties but also of revealing the spatio-temporal evolution of aerosol features. The nighttime 3β + 1α lidar measurements on 21 and 22 July were inverted to spatio-temporal distributions of particle microphysical parameters, such as volume, number density, effective radius and the complex refractive index. The particle volume and number density show strong variation during the night, while the effective radius remains approximately constant. The real part of the refractive index demonstrates a slight decreasing tendency in a region of enhanced extinction coefficient. The linear estimation retrievals are stable and provide time series of particle parameters as a function of height at 4 min resolution. AERONET observations are compared with multiwavelength lidar retrievals, showing good agreement.
Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem
NASA Astrophysics Data System (ADS)
Allen, Robert C.
The objective was to build a model which could be used to estimate neutron yield in pulsed z-pinch experiments, benchmark future z-pinch simulation tools and to assist scaling for breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized which incorporates a self-similar solution with cylindrical symmetry and azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time dependent implosion dynamics from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity and gain for a range of densities and time scales for DD reactions and a curve fit was done to predict the scaling as a function of preshock conditions.
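The batch-burn yield estimate has the familiar form Y = (1/2) n_i^2 <sigma v> V tau for identical reactants; a minimal sketch, in which the ion density, reactivity, stagnation volume, and burn time are all user-supplied assumptions (the reactivity in particular must come from a tabulated fit, not the placeholder used here):

```python
def dd_neutron_yield(n_i, sigma_v, volume, tau):
    """Batch-burn estimate of DD neutron yield:
    Y = 1/2 * n_i^2 * <sigma v> * V * tau, where the factor 1/2 is the
    identical-particle correction and sigma_v is taken to be the
    reactivity of the neutron-producing DD branch (an assumption)."""
    return 0.5 * n_i ** 2 * sigma_v * volume * tau

# illustrative magnitudes only; not values from the referenced experiments
Y = dd_neutron_yield(n_i=1e26,        # ions/m^3 at stagnation
                     sigma_v=1e-24,   # m^3/s, assumed DD(n) reactivity
                     volume=1e-8,     # m^3, stagnated column volume
                     tau=5e-9)        # s, burn (stagnation) time
print(f"Y ~ {Y:.2e} neutrons")
```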
Broekhuis, Femke; Gopalaswamy, Arjun M.
2016-01-01
Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.
Investigating the ability of solar coronal shocks to accelerate solar energetic particles
NASA Astrophysics Data System (ADS)
Kwon, R. Y.; Vourlidas, A.
2017-12-01
We estimate the density compression ratio of shocks associated with coronal mass ejections (CMEs) and investigate whether they can accelerate solar energetic particles (SEPs). Using remote-sensing, multi-viewpoint coronagraphic observations, we have developed a method to extract the sheath electron density profiles along the shock normal and estimate the density compression ratio. Our method uses the ellipsoid model to derive the 3D geometry of the sheaths, including the line-of-sight (LOS) depth. The sheath density profiles along the shock normal are modeled with double-Gaussian functions, and the modeled densities are integrated along the LOSs to be compared with the observed brightness in STEREO COR2-Ahead. The upstream densities are derived from either the pB-inversion of the brightness in a pre-event image or an empirical model. We analyze two fast halo CMEs observed on 2011 March 7 and 2014 February 25 that are associated with SEP events detected by multiple spacecraft located over a broad range of heliolongitudes. We find that the density compression peaks around the CME nose and decreases at larger position angles. Interestingly, we find that the supercritical region extends over a large area of the shock and lasts longer (several tens of minutes) than past reports. This finding implies that CME shocks may be capable of accelerating energetic particles in the corona over extended spatial and temporal scales and may, therefore, be responsible for the wide longitudinal distribution of these particles in the inner heliosphere.
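A minimal sketch of the profile-modeling step: fit a double-Gaussian-plus-upstream form to a density cut along the shock normal and read off the compression ratio as peak-to-upstream density. The synthetic profile and the exact parameterization are assumptions; the paper specifies only that sheath profiles are modeled with double-Gaussian functions.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(r, a1, r1, s1, a2, r2, s2, n_up):
    """Sheath density along the shock normal: two Gaussians on top of
    an upstream level n_up (parameter layout is an assumption)."""
    return (n_up
            + a1 * np.exp(-(r - r1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(r - r2) ** 2 / (2 * s2 ** 2)))

# synthetic normalized profile standing in for a COR2-derived density cut
r = np.linspace(3.0, 6.0, 120)                   # heliocentric distance, Rsun
true = double_gaussian(r, 3.0, 4.0, 0.15, 1.0, 4.6, 0.4, 1.0)
n_obs = true + np.random.default_rng(6).normal(0, 0.05, r.size)

p0 = [2.0, 4.0, 0.2, 0.5, 4.5, 0.5, 1.0]         # initial guess
popt, _ = curve_fit(double_gaussian, r, n_obs, p0=p0)

fit = double_gaussian(r, *popt)
compression = fit.max() / popt[-1]               # peak / upstream density
print(f"density compression ratio ~ {compression:.2f}")
```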
Boone-Heinonen, Janne; Diez-Roux, Ana V.; Goff, David C.; Loria, Catherine M.; Kiefe, Catarina I.; Popkin, Barry M.; Gordon-Larsen, Penny
2013-01-01
Background: Recent obesity prevention initiatives focus on healthy neighborhood design, but most research examines neighborhood food retail and physical activity (PA) environments in isolation. We estimated joint, interactive, and cumulative impacts of neighborhood food retail and PA environment characteristics on body mass index (BMI) throughout early adulthood.
Methods and Findings: We used cohort data from the Coronary Artery Risk Development in Young Adults (CARDIA) Study [n=4,092; Year 7 (24-42 years, 1992-1993) followed over 5 exams through Year 25 (2010-2011); 12,921 person-exam observations], with linked time-varying geographic information system-derived neighborhood environment measures. Using regression with fixed effects for individuals, we modeled time-lagged BMI as a function of food and PA resource density (counts per population) and neighborhood development intensity (a composite density score). We controlled for neighborhood poverty, individual-level sociodemographics, and BMI in the prior exam, and included significant interactions between neighborhood measures and by sex. Using model coefficients, we simulated BMI reductions in response to single and combined neighborhood improvements. A simulated increase in supermarket density (from the 25th to the 75th percentile) predicted an inter-exam reduction in BMI of 0.09 kg/m2 [estimate (95% CI): -0.09 (-0.16, -0.02)]. Increasing commercial PA facility density predicted BMI reductions up to 0.22 kg/m2 in men, with variation across other neighborhood features [estimate (95% CI) range: -0.14 (-0.29, 0.01) to -0.22 (-0.37, -0.08)]. Simultaneous increases in supermarket and commercial PA facility density predicted inter-exam BMI reductions up to 0.31 kg/m2 in men [estimate (95% CI) range: -0.23 (-0.39, -0.06) to -0.31 (-0.47, -0.15)] but not women. Reduced fast food restaurant and convenience store density and increased public PA facility density and neighborhood development intensity did not predict reductions in BMI.
Conclusions: Findings suggest that improvements in neighborhood food retail or PA environments may accumulate to reduce BMI, but some neighborhood changes may be less beneficial to women.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2015-09-30
... sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over ... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse ...
Estimating Small-Body Gravity Field from Shape Model and Navigation Data
NASA Technical Reports Server (NTRS)
Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam
2008-01-01
This paper presents a method to model the external gravity field and to estimate the internal density variation of a small-body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite elements definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the level of accuracies are presented. We then discuss the inverse problem where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
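A minimal sketch of the forward problem with cubic finite elements, simplified further by treating each cell as a point mass; the body size, grid, and density are toy values, and the paper's cube elements would use the exact cuboid potential rather than this point-mass shortcut.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def finite_element_gravity(field_point, centers, densities, cell_vol):
    """Gravitational acceleration at an external point from a body
    discretized into cubic cells, each treated as a point mass."""
    dr = centers - field_point                 # vectors to each cell center
    r = np.linalg.norm(dr, axis=1)
    m = densities * cell_vol                   # mass of each cell
    return G * np.sum((m / r ** 3)[:, None] * dr, axis=0)

# toy small body: 20x20x20 grid of 50 m cells, uniform 2000 kg/m^3
h = 50.0
g1d = (np.arange(20) - 9.5) * h
X, Y, Z = np.meshgrid(g1d, g1d, g1d, indexing="ij")
centers = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
rho = np.full(len(centers), 2000.0)

a = finite_element_gravity(np.array([2000.0, 0.0, 0.0]), centers, rho, h ** 3)
print("acceleration vector [m/s^2]:", a)
```

Estimating an internal density variation would then amount to solving the corresponding inverse problem: adjusting the per-cell densities so the modeled accelerations match tracking observations, subject to a priori constraints.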
Impact of density information on Rayleigh surface wave inversion results
NASA Astrophysics Data System (ADS)
Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai
2016-12-01
We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7%, and the use of densities from a borehole reduced the final Vs estimates by 10-11%, compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of density increasing with depth not only can lead to Vs overestimation but can also create inaccurate model structures, such as a spurious low-velocity layer. Thus, optimal Vs estimations are best achieved using field estimates of subsurface density ratios.
Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador
Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew
2017-01-01
The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.
Density contrast across the Moho beneath the Indian shield: Implications for isostasy
NASA Astrophysics Data System (ADS)
Paul, Himangshu; Mangalampally, Ravi Kumar; Tiwari, Virendra Mani; Singh, Arun; Chadha, Rajender Kumar; Davuluri, Srinagesh
2018-04-01
Knowledge of isostasy provides insights into how excess (or deficit) of mass on and within the lithosphere is maintained over different time scales, and also helps decipher the vertical dynamics. In continental regions, isostasy is primarily manifested as a crustal root, the extent of which is defined by the lithospheric strength and the density contrast at the Moho. In this study, we briefly review the methodology for extracting the density contrast across the Moho using the amplitudes of the P-to-s converted and free-surface reverberating phases in a receiver function (RF). We test the efficacy of this technique by applying it on synthetic and real data from 10 broadband seismic stations sited on diverse tectonic provinces in the Indian shield. We determine the density contrast after parameterizing the shear-wave velocity structure beneath the stations using the neighbourhood algorithm. We find considerable variation in the density contrast across the Moho beneath the stations (0.4-0.65 g/cm3). This is explained in terms of isostatic compensation, incorporating the existing estimates of lithospheric strength (Te). Crustal roots computed using the estimated Te and the deduced density contrast substantiate the crustal thickness values inferred through RF analysis, and vice versa. This illustrates isostasy as a combination of variation in density contrast and Te. The density contrasts and crustal thicknesses inferred from RF analysis explain well the isostatic compensation mechanism in different regions. However, unusually large density contrasts (∼0.6 g/cm3) corresponding to elevated regions are intriguing and warrant further investigations. Our observation of varied density contrasts at the Moho in a Precambrian continental setting is interesting and raises a question about the existence of such situations in other parts of the world.
Exact hierarchical clustering in one dimension. [in universe
NASA Technical Reports Server (NTRS)
Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.
1991-01-01
The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.
Evolution of Metastable Defects and Its Effect on the Electronic Properties of MoS2 Films.
Precner, M; Polaković, T; Qiao, Qiao; Trainer, D J; Putilov, A V; Di Giorgio, C; Cone, I; Zhu, Y; Xi, X X; Iavarone, M; Karapetrov, G
2018-04-30
We report on structural and electronic properties of defects in chemical vapor-deposited monolayer and few-layer MoS2 films. Scanning tunneling microscopy, Kelvin probe force microscopy, and transmission electron microscopy were used to obtain high resolution images and quantitative measurements of the local density of states, work function and nature of defects in MoS2 films. We track the evolution of defects that are formed under heating and electron beam irradiation. We observe formation of metastable domains with different work function values after annealing the material in ultra-high vacuum to moderate temperatures. We attribute these metastable values of the work function to evolution of crystal defects forming during the annealing. The experiments show that sulfur vacancies formed after exposure to elevated temperatures diffuse, coalesce, and migrate, bringing the system from a metastable to equilibrium ground state. The process could be thermally or e-beam activated, with an estimated energy barrier for sulfur vacancy migration of 0.6 eV in single unit cell MoS2. Even at equilibrium conditions, the work function and local density of states values are strongly affected near grain boundaries and edges. The results provide initial estimates of the thermal budgets available for reliable fabrication of MoS2-based integrated electronics and indicate the importance of defect control and layer passivation.
Cawthon, Peggy Mannen; Fox, Kathleen M; Gandra, Shravanthi R; Delmonico, Matthew J; Chiou, Chiun-Fang; Anthony, Mary S; Sewall, Ase; Goodpaster, Bret; Satterfield, Suzanne; Cummings, Steven R; Harris, Tamara B
2009-08-01
To examine the association between strength, function, lean mass, muscle density, and risk of hospitalization. Prospective cohort study. Two U.S. clinical centers. Adults aged 70 to 80 (N=3,011) from the Health, Aging and Body Composition Study. Measurements were of grip strength, knee extension strength, lean mass, walking speed, and chair stand pace. Thigh computed tomography scans assessed muscle area and density (a proxy for muscle fat infiltration). Hospitalizations were confirmed by local review of medical records. Negative binomial regression models estimated incident rate ratios (IRRs) of hospitalization for race- and sex-specific quartiles of each muscle and function parameter separately. Multivariate models adjusted for age, body mass index, health status, and coexisting medical conditions. During an average 4.7 years of follow-up, 1,678 (55.7%) participants experienced one or more hospitalizations. Participants in the lowest quartile of muscle density were more likely to be subsequently hospitalized (multivariate IRR=1.47, 95% confidence interval (CI)=1.24-1.73) than those in the highest quartile. Similarly, participants with the weakest grip strength were at greater risk of hospitalization (multivariate IRR=1.52, 95% CI=1.30-1.78, Q1 vs. Q4). Comparable results were seen for knee strength, walking pace, and chair stands pace. Lean mass and muscle area were not associated with risk of hospitalization. Weak strength, poor function, and low muscle density, but not muscle size or lean mass, were associated with greater risk of hospitalization. Interventions to reduce the disease burden associated with sarcopenia should focus on increasing muscle strength and improving physical function rather than simply increasing lean mass.
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, and provided three-dimensional quantitative information of cells in the primary motor cortex of C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in mouse primary motor cortex, and find significant cellular density distribution variations in different layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
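A minimal sketch of the four-step pipeline described above, using scipy.ndimage on a synthetic two-dimensional slice (the actual pipeline operates on 1-μm-voxel volumes and is considerably more elaborate):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.poisson(2.0, (512, 512)).astype(float)
for _ in range(300):                       # plant bright blobs as stand-in somata
    r, c = rng.integers(10, 502, 2)
    img[r-3:r+4, c-3:c+4] += 40

smoothed = ndimage.gaussian_filter(img, sigma=2)          # i) preprocessing
binary = smoothed > smoothed.mean() + 2 * smoothed.std()  # ii) binarization
labels, n_cells = ndimage.label(binary)                   # iii) segmentation
centroids = np.array(ndimage.center_of_mass(binary, labels,
                                            np.arange(1, n_cells + 1)))

# iv) laminar density: histogram of centroid depth (rows as a proxy for
# distance from the pial surface), normalized to cells per unit area.
depth_bins = np.linspace(0, 512, 9)
counts, _ = np.histogram(centroids[:, 0], bins=depth_bins)
density = counts / (np.diff(depth_bins) * 512)
print(f"{n_cells} cells detected; per-layer densities: {np.round(density, 4)}")
```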
Ecologically relevant levels of multiple, common marine stressors suggest antagonistic effects.
Lange, Rolanda; Marshall, Dustin
2017-07-24
Stressors associated with global change will be experienced simultaneously and may act synergistically, so attempts to estimate the capacity of marine systems to cope with global change require a multi-stressor approach. Because recent evidence suggests that stressor effects can be context-dependent, estimates of how stressors are experienced in ecologically realistic settings will be particularly valuable. To enhance our understanding of the interplay between environmental effects and the impact of multiple stressors from both natural and anthropogenic sources, we conducted a field experiment. We explored the impact of multiple, functionally varied stressors from both natural and anthropogenic sources experienced during early life history in a common sessile marine invertebrate, Bugula neritina. Natural spatial environmental variation induced differences in conspecific densities, allowing us to test for density-driven context-dependence of stressor effects. We indeed found density-dependent effects. Under high conspecific density, individual survival increased, which offset part of the negative effects of experiencing stressors. Experiencing multiple stressors early in life history translated into decreased survival in the field, although the effects were not as drastic as we expected: our results are congruent with antagonistic stressor effects. We speculate that when individual stressors are more subtle, stressor synergies become less common.
Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.
2015-01-01
We test this algorithm on the Proterozoic Midcontinent Rift (MCR), north-central U.S. The MCR provides a challenge because it hosts a gravity high overlying low shear-wave velocity crust in a generally flat region. Our initial density estimates are derived from a seismic velocity/crustal thickness model based on joint inversion of surface-wave dispersion and receiver functions. By adjusting these estimates to reproduce gravity and topography, we generate a lithospheric-scale model that reveals dense middle crust and eclogitized lowermost crust within the rift. Mantle lithospheric density beneath the MCR is not anomalous, consistent with geochemical evidence that lithospheric mantle was not the primary source of rift-related magmas and suggesting that extension occurred in response to far-field stress rather than a hot mantle plume. Similarly, the subsequent inversion of normal faults resulted from changing far-field stress that exploited not only warm, recently faulted crust but also a gravitational potential energy low in the MCR. The success of this density modeling algorithm in the face of such apparently contradictory geophysical properties suggests that it may be applicable to a variety of tectonic and geodynamic problems.
Liu, Quanying; Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2018-01-01
Resting state networks (RSNs) in the human brain were recently detected using high-density electroencephalography (hdEEG). This was done by using an advanced analysis workflow to estimate neural signals in the cortex and to assess functional connectivity (FC) between distant cortical regions. FC analyses were conducted either using temporal (tICA) or spatial independent component analysis (sICA). Notably, EEG-RSNs obtained with sICA were very similar to RSNs retrieved with sICA from functional magnetic resonance imaging data. It still remains to be clarified, however, what technological aspects of hdEEG acquisition and analysis primarily influence this correspondence. Here we examined to what extent the detection of EEG-RSN maps by sICA depends on the electrode density, the accuracy of the head model, and the source localization algorithm employed. Our analyses revealed that the collection of EEG data using a high-density montage is crucial for RSN detection by sICA, but also the use of appropriate methods for head modeling and source localization have a substantial effect on RSN reconstruction. Overall, our results confirm the potential of hdEEG for mapping the functional architecture of the human brain, and highlight at the same time the interplay between acquisition technology and innovative solutions in data analysis. PMID:29551969
Disrupted resting brain graph measures in individuals at high risk for alcoholism.
Holla, Bharath; Panda, Rajanikant; Venkatasubramanian, Ganesan; Biswal, Bharat; Bharath, Rose Dawn; Benegal, Vivek
2017-07-30
Familial susceptibility to alcoholism is likely to be linked to the externalizing diathesis seen in high-risk offspring from high-density alcohol use disorder (AUD) families. The present study aimed at comparing resting brain functional connectivity, and its association with externalizing symptoms and alcoholism familial density, in 40 substance-naive high-risk (HR) male offspring from high-density AUD families and 30 matched healthy low-risk (LR) males without a family history of substance dependence, using graph theory-based network analysis. The HR subjects from high-density AUD families, compared with LR, showed significantly reduced clustering, small-worldness, and local network efficiency. The frontoparietal, cingulo-opercular, sensorimotor and cerebellar networks exhibited significantly reduced functional segregation. These disruptions exhibited independent incremental value in predicting the externalizing symptoms over and above the demographic variables. The reduction of functional segregation in HR subjects was significant across both the younger and older age groups and was proportional to the family loading of AUDs. Detection and estimation of these developmentally relevant disruptions in small-world architecture at critical brain regions subserving cognitive, affective, and sensorimotor processes are vital for understanding the familial risk for early onset alcoholism as well as the pathophysiological mechanism of externalizing behaviors. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Physical and optical property studies on Bi3+ ion containing vanadium sodium borate glasses
NASA Astrophysics Data System (ADS)
Venkatesh, G.; Meera, B. N.; Eraiah, B.
2018-04-01
xBi2O3-(15-x)V2O5-45B2O3-40Na2O glasses have been prepared using the melt-quenching technique. The amorphous nature of the glasses was verified using powder XRD. Densities and molar volumes have been determined as a function of bismuth content and, interestingly, both increase with bismuth content. Further, the oxygen packing density (OPD) is found to decrease with bismuth content. The increase in the molar volume as a function of bismuth content may be due to structural changes in the glass network. The optical properties were determined from optical absorption spectra recorded in the wavelength range 200-1100 nm using a UV-Visible spectrophotometer. The theoretical optical basicity of the oxides has also been estimated. The calculated energy band gap values increase with increasing Bi2O3 content.
NASA Astrophysics Data System (ADS)
Xie, Gui-long; Zhang, Yong-hong; Huang, Shi-ping
2012-04-01
Using coarse-grained molecular dynamics simulations based on the Gay-Berne potential model, we have simulated the cooling process of liquid n-butanol. A new set of GB parameters is obtained by fitting the results of density functional theory calculations. The simulations are carried out in the range of 290-50 K with temperature decrements of 10 K. The cooling characteristics are determined from the variations of the density, the potential energy and the orientational order parameter with temperature, whose slopes all show a discontinuity. Both the radial distribution function curves and the second-rank orientational correlation function curves exhibit splitting of the second peak. Using the discontinuous change of these thermodynamic and structural properties, we estimate the glass transition temperature to be Tg = 120±10 K, in good agreement with the experimental result of 110±1 K.
Elastomer Reinforced with Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Hudson, Jared L.; Krishnamoorti, Ramanan
2009-01-01
Elastomers are reinforced with functionalized, single-walled carbon nanotubes (SWNTs), giving them high breaking-strain levels and low densities. Cross-linked elastomers are prepared using amine-terminated poly(dimethylsiloxane) (PDMS), with an average molecular weight of 5,000 daltons, and a functionalized SWNT. Cross-link densities, estimated on the basis of swelling data in toluene (a dispersing solvent), indicated that the polymer underwent cross-linking at the ends of the chains. This thermally initiated cross-linking was found to occur only in the presence of the aryl alcohol functionalized SWNTs. The cross-link could have been via a hydrogen-bonding mechanism between the amine and the free hydroxyl group, or via attack of the amine on the ester linkage to form an amide. Tensile properties examined at room temperature indicate a three-fold increase in the tensile modulus of the elastomer, with rupture and failure of the elastomer occurring at a strain of 6.5.
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand-canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to the empirical partial molar volume corrections that have been proposed recently.
Effect of van der Waals interactions on the structural and binding properties of GaSe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkisov, Sergey Y., E-mail: sarkisov@mail.tsu.ru; Kosobutsky, Alexey V., E-mail: kosobutsky@kemsu.ru; Kemerovo State University, Krasnaya 6, 650043 Kemerovo
The influence of van der Waals interactions on the lattice parameters, band structure, elastic moduli and binding energy of the layered GaSe compound has been studied using the projector-augmented wave method within density functional theory. We employed the conventional local/semilocal exchange-correlation functionals and recently developed van der Waals functionals which are able to describe dispersion forces. It is found that application of van der Waals density functionals allows a substantial increase in the accuracy of calculations of the lattice constants a and c and the interlayer distance in GaSe at ambient conditions and under hydrostatic pressure. The pressure dependences of the a-parameter, the Ga–Ga and Ga–Se bond lengths, and the Ga–Ga–Se bond angle are characterized by a relatively low curvature, while c(p) has a distinct downward bowing due to nonlinear shrinking of the interlayer spacing. From the calculated binding energy curves we deduce the interlayer binding energy of GaSe, which is found to be in the range 0.172–0.197 eV/layer (14.2–16.2 meV/Å2). - Highlights: • Effects of van der Waals interactions are analyzed using advanced density functionals. • Calculations with vdW-corrected functionals closely agree with experiment. • Interlayer binding energy of GaSe is estimated to be 14.2–16.2 meV/Å2.
Lee, Kyungtae; Gu, Geun Ho; Mullen, Charles A; Boateng, Akwasi A; Vlachos, Dionisios G
2015-01-01
Density functional theory is used to study the adsorption of guaiacol and its initial hydrodeoxygenation (HDO) reactions on Pt(111). Previous Brønsted-Evans-Polanyi (BEP) correlations for small open-chain molecules are inadequate for estimating the reaction barriers of phenolic compounds, except for the side-group (methoxy) carbon dehydrogenation. New BEP relations are established using a select group of phenolic compounds. These relations are applied to construct a potential-energy surface of guaiacol HDO to catechol. Analysis shows that catechol is mainly produced via dehydrogenation of the methoxy functional group, followed by removal of the CHx (x<3) group and hydrogenation of the ring carbon, in contrast to a hypothesized direct demethylation path. Dehydroxylation and demethoxylation are slow, implying that phenol is likely produced from catechol but not through its direct dehydroxylation followed by aromatic carbon-ring hydrogenation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Temporal variation in bird counts within a Hawaiian rainforest
Simon, John C.; Pratt, T.K.; Berlin, Kim E.; Kowalsky, James R.; Fancy, S.G.; Hatfield, J.S.
2002-01-01
We studied monthly and annual variation in density estimates of nine forest bird species along an elevational gradient in an east Maui rainforest. We conducted monthly variable circular-plot counts for 36 consecutive months along transects running downhill from timberline. Density estimates were compared by month, year, and station for all resident bird species with sizeable populations, including four native nectarivores, two native insectivores, a non-native insectivore, and two non-native generalists. We compared densities among three elevational strata and between breeding and nonbreeding seasons. All species showed significant differences in density estimates among months and years. Three native nectarivores had higher density estimates within their breeding season (December-May) and showed decreases during periods of low nectar production following the breeding season. All insectivore and generalist species except one had higher density estimates within their March-August breeding season. Density estimates also varied with elevation for all species, and for four species a seasonal shift in population was indicated. Our data show that the best time to conduct counts for native forest birds on Maui is January-February, when birds are breeding or preparing to breed, counts are typically high, variability in density estimates is low, and the likelihood for fair weather is best. Temporal variations in density estimates documented in our study site emphasize the need for consistent, well-researched survey regimens and for caution when drawing conclusions from, or basing management decisions on, survey data.
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys
Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik
2011-01-01
The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra, from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that reliable spectral estimates can be obtained at frequencies up to about five times the mean sampling rate.
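For readers unfamiliar with the slotting technique, the following numpy sketch (assumed variable names; a stand-in for the FORTRAN codes mentioned above) bins products of sample pairs by their random time separation to estimate the autocorrelation, then cosine-transforms it into a power spectral density:

```python
import numpy as np

def slotted_autocorrelation(t, u, dtau, n_slots):
    """Estimate R(k*dtau) by averaging products u_i*u_j of all sample
    pairs whose time separation falls in lag slot k."""
    u = u - u.mean()                         # work with velocity fluctuations
    num, cnt = np.zeros(n_slots), np.zeros(n_slots)
    for i in range(len(t) - 1):
        lag = t[i + 1:] - t[i]
        k = np.floor(lag / dtau + 0.5).astype(int)   # nearest slot index
        ok = k < n_slots
        np.add.at(num, k[ok], u[i] * u[i + 1:][ok])
        np.add.at(cnt, k[ok], 1.0)
    R = np.where(cnt > 0, num / np.maximum(cnt, 1.0), 0.0)
    return R / R[0]                          # normalize by the variance slot

rng = np.random.default_rng(2)
t = np.cumsum(rng.exponential(1 / 500, 5000))        # ~500 Hz mean sample rate
u = np.sin(2 * np.pi * 40 * t) + 0.3 * rng.normal(size=t.size)

dtau, n_slots = 1 / 2000, 200
R = slotted_autocorrelation(t, u, dtau, n_slots)

# Power spectral density via a discrete cosine transform of R(tau).
taus = np.arange(n_slots) * dtau
freqs = np.linspace(1, 1000, 400)
psd = np.array([2 * dtau * np.sum(R * np.cos(2 * np.pi * f * taus)) for f in freqs])
print("spectral peak near", freqs[np.argmax(psd)], "Hz")   # expect ~40 Hz
```

Note that the recovered 40 Hz peak sits well above the Nyquist limit of an equispaced 500 Hz record, which is the point of spectral analysis from randomly sampled data.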
Evaluation of line transect sampling based on remotely sensed data from underwater video
Bergstedt, R.A.; Anderson, D.R.
1990-01-01
We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
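The classical Fourier-series line-transect estimator referenced above has the form D = n f(0) / (2L); a minimal sketch with simulated half-normal detections (all parameter values hypothetical):

```python
import numpy as np

def fourier_f0(x, w, m=3):
    """Fourier-series estimate of the perpendicular-distance density at zero."""
    n = len(x)
    a = [(2.0 / (n * w)) * np.cos(k * np.pi * x / w).sum() for k in range(1, m + 1)]
    return 1.0 / w + sum(a)

rng = np.random.default_rng(3)
w, L, sigma = 6.0, 50.0, 2.0             # truncation width, transect length
x = rng.uniform(0, w, 5000)              # true objects' perpendicular distances
seen = x[rng.random(x.size) < np.exp(-x**2 / (2 * sigma**2))]  # half-normal detection

D_hat = len(seen) * fourier_f0(seen, w) / (2 * L)
print(f"estimated density: {D_hat:.1f} / true density: {5000 / (2 * L * w):.1f}")
```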
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.
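The core of the SECR machinery (here and in the Andean bear study above) is a spatial detection function integrated over a grid of potential activity centres. A minimal sketch with assumed detection parameters and a hypothetical count of detected individuals; a full SECR fit would estimate density and detection parameters jointly from the encounter histories:

```python
import numpy as np

p0, sigma, n_occasions = 0.1, 1.5, 90          # assumed detection parameters
traps = np.array([(x, y) for x in range(5) for y in range(5)], float) * 2.0

# Discretized state space of potential activity centres around the trap array.
gx, gy = np.meshgrid(np.linspace(-4, 12, 80), np.linspace(-4, 12, 80))
centres = np.column_stack([gx.ravel(), gy.ravel()])
cell_area = (16 / 80) ** 2

d2 = ((centres[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
p_trap = p0 * np.exp(-d2 / (2 * sigma ** 2))            # half-normal, per occasion
p_dot = 1 - np.prod((1 - p_trap) ** n_occasions, axis=1)  # ever detected at all

# E[n] = D * integral of p.(s) ds, so D = n / effective sampling area.
effective_area = p_dot.sum() * cell_area
n_detected = 24                                          # hypothetical count
D_hat = n_detected / effective_area
print(f"density: {D_hat * 100:.2f} individuals per 100 area units")
```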
Performance limitations of a white light extrinsic Fabry-Perot interferometric displacement sensor
NASA Astrophysics Data System (ADS)
Moro, Erik A.; Todd, Michael D.; Puckett, Anthony D.
2012-06-01
Non-contacting interferometric fiber optic sensors offer a minimally invasive, high-accuracy means of measuring a structure's kinematic response to loading. The performance of interferometric sensors is often dictated by the technique employed for demodulating the kinematic measurand of interest from phase in the observed optical signal. In this paper a white-light extrinsic Fabry-Perot interferometer is implemented, offering robust displacement sensing performance. Displacement data are extracted from an estimate of the power spectral density, calculated from the interferometer's received optical power measured as a function of optical transmission frequency, and the sensor's performance is dictated by the details of this power spectral density estimation. One advantage of this particular type of interferometric sensor is that many of its control parameters (e.g., frequency range, frequency sampling density, sampling rate, etc.) may be chosen so that the sensor satisfies application-specific performance needs in metrics such as bandwidth, axial displacement range, displacement resolution, and accuracy. A suite of user-controlled input values is investigated for estimating the spectrum of power versus wavelength data, and the relationships between performance metrics and input parameters are described in an effort to characterize the sensor's operational performance limitations. This work has been approved by Los Alamos National Laboratory for unlimited public release (LA-UR 12-01512).
Bandura, Andrei V; Kubicki, James D; Sofo, Jorge O
2008-09-18
Mono- and bilayer adsorption of H2O molecules on TiO2 and SnO2 (110) surfaces has been investigated using static plane-wave density functional theory (PW DFT) simulations. Potential energies and structures were calculated for the associative, mixed, and dissociative adsorption states. The density of states (DOS) of the bare and hydrated surfaces has been used for the analysis of the difference between the H2O interaction with TiO2 and SnO2 surfaces. The important role of the bridging oxygen in the H2O dissociation process is discussed. The influence of the second layer of H2O molecules on the relaxation of the surface atoms was estimated.
Morales, Miguel A.; Pierleoni, Carlo; Schwegler, Eric; Ceperley, D. M.
2010-01-01
Using quantum simulation techniques based on either density functional theory or quantum Monte Carlo, we find clear evidence of a first-order transition in liquid hydrogen, between a low conductivity molecular state and a high conductivity atomic state. Using the temperature dependence of the discontinuity in the electronic conductivity, we estimate the critical point of the transition at temperatures near 2,000 K and pressures near 120 GPa. Furthermore, we have determined the melting curve of molecular hydrogen up to pressures of 200 GPa, finding a reentrant melting line. The melting line crosses the metalization line at 700 K and 220 GPa using density functional energetics and at 550 K and 290 GPa using quantum Monte Carlo energetics. PMID:20566888
Efficient and robust computation of PDF features from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2009-10-01
We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are derived from the displacement probability density function (PDF). The estimation is based on two steps: first, approximation of the signal by a series expansion of Gaussian-Laguerre and spherical harmonic functions; second, a projection onto a finite-dimensional space. In addition, we tackle the problem of robustness to the Rician noise that corrupts in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a variational framework that is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time frame.
Towards an exact correlated orbital theory for electrons
NASA Astrophysics Data System (ADS)
Bartlett, Rodney J.
2009-12-01
The formal and computational attraction of effective one-particle theories like Hartree-Fock and density functional theory raises the question of how far such approaches can be taken to offer exact results for selected properties of electrons in atoms, molecules, and solids. Some properties can be exactly described within an effective one-particle theory, like principal ionization potentials and electron affinities. This fact can be used to develop equations for a correlated orbital theory (COT) that guarantees a correct one-particle energy spectrum. They are built upon a coupled-cluster-based, frequency-independent self-energy operator presented here, which distinguishes the approach from Dyson theory. The COT also offers an alternative to Kohn-Sham density functional theory (DFT), whose objective is to represent the electronic density exactly as a single determinant, while paying less attention to the energy spectrum. For any estimate of two-electron terms, COT offers a litmus test of its accuracy for principal ionization potentials and electron affinities. This feature for approximating the COT equations is illustrated numerically.
Poudel, Lokendra; Wen, Amy M; French, Roger H; Parsegian, V Adrian; Podgornik, Rudolf; Steinmetz, Nicole F; Ching, Wai-Yim
2015-05-18
The electronic structure and partial charge of doxorubicin (DOX) in three different molecular environments (isolated, solvated, and intercalated in a DNA complex) are studied by first-principles density functional methods. It is shown that the addition of solvating water molecules to DOX, together with the proximity to and interaction with DNA, has a significant impact on the electronic structure as well as on the partial charge distribution. Significant improvement in estimating the DOX-DNA interaction energy is achieved. The results are further elucidated by resolving the total density of states and surface charge density into different functional groups. It is concluded that the presence of the solvent and the details of the interaction geometry matter greatly in determining the stability of DOX complexation. Ab initio calculations on realistic models are an important step toward a more accurate description of the long-range interactions in biomolecular systems. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Makhov, D. V.; Lewis, Laurent J.
2005-05-01
The positron lifetimes for various vacancy clusters in silicon are calculated within the framework of the two-component electron-positron density functional theory. The effect of the trapped positron on the electron density and on the relaxation of the structure is investigated. Our calculations show that, contrary to the usual assumption, the positron-induced forces do not compensate in general for electronic inward forces. Thus, geometry optimization is required in order to determine positron lifetime accurately. For the monovacancy and the divacancy, the results of our calculations are in good agreement with the experimental positron lifetimes, suggesting that this approach gives good estimates of positron lifetimes for larger vacancy clusters, required for their correct identification with positron annihilation spectroscopy. As an application, our calculations show that fourfold trivacancies and symmetric fourfold tetravacancies have positron lifetimes similar to monovacancies and divacancies, respectively, and can thus be confused in the interpretation of positron annihilation experiments.
Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble
2016-06-17
Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
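The basic building block the study modifies is the closed binomial N-mixture model (Royle 2004); a minimal sketch on simulated replicate counts that integrates the latent site abundance out of the likelihood (the study's open variants relax the closure assumption that this version encodes):

```python
import numpy as np
from scipy import stats, optimize

def nll(params, y, n_max=200):
    """Negative log-likelihood: N ~ Poisson(lam), counts y ~ Binomial(N, p)."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    N = np.arange(n_max + 1)
    prior = stats.poisson.pmf(N, lam)                   # P(N)
    ll = 0.0
    for site_counts in y:                               # integrate N out per site
        lik_given_N = np.prod(stats.binom.pmf(site_counts[:, None], N, p), axis=0)
        ll += np.log(np.sum(lik_given_N * prior) + 1e-300)
    return -ll

rng = np.random.default_rng(4)
true_lam, true_p, n_sites, n_visits = 30, 0.4, 40, 3
N_true = rng.poisson(true_lam, n_sites)
y = rng.binomial(N_true[:, None], true_p, (n_sites, n_visits))

res = optimize.minimize(nll, x0=[np.log(10), 0.0], args=(y,), method="Nelder-Mead")
print("lambda_hat =", np.exp(res.x[0]), " p_hat =", 1 / (1 + np.exp(-res.x[1])))
```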
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.
Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
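As an illustration of the joint-and-conditional-PDF workflow described above, the following stand-in uses scipy's rule-of-thumb Gaussian KDE rather than the fastKDE estimator itself (the method is distributed as the fastkde Python package); the conditioning step is the same either way:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 10_000)
y = 0.8 * x + rng.normal(0, 0.6, 10_000)          # correlated bivariate sample

kde = gaussian_kde(np.vstack([x, y]))             # joint PDF estimate f(x, y)

# Conditional PDF f(y | x=x0) = f(x0, y) / f_X(x0), on a grid in y.
x0 = 1.0
y_grid = np.linspace(-4, 4, 201)
dy = y_grid[1] - y_grid[0]
joint = kde(np.vstack([np.full_like(y_grid, x0), y_grid]))
cond = joint / np.sum(joint * dy)                 # normalize numerically
print("E[y | x=1] ~", np.sum(y_grid * cond) * dy)  # should be close to 0.8
```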
NASA Astrophysics Data System (ADS)
D'Isanto, A.; Polsterer, K. L.
2018-01-01
Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances on experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. Thereby the prediction performance is better than both presented reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows the solution of any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
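Since the predictive PDFs are Gaussian mixtures, the CRPS mentioned above has a closed form (Grimit et al. 2006); a minimal sketch, with made-up mixture parameters standing in for a network's output:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian_mixture(y, w, mu, sigma):
    """Closed-form CRPS of a Gaussian mixture at observation y."""
    def A(m, s):   # E|X| for X ~ N(m, s^2)
        return m * (2 * norm.cdf(m / s) - 1) + 2 * s * norm.pdf(m / s)
    term1 = np.sum(w * A(y - mu, sigma))
    s_ij = np.sqrt(sigma[:, None] ** 2 + sigma[None, :] ** 2)
    term2 = 0.5 * np.sum(w[:, None] * w[None, :]
                         * A(mu[:, None] - mu[None, :], s_ij))
    return term1 - term2

# A three-component predictive PDF for one object's redshift (made-up values).
w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.42, 0.48, 0.60])
sigma = np.array([0.02, 0.04, 0.08])
print("CRPS at z_spec = 0.45:", crps_gaussian_mixture(0.45, w, mu, sigma))
```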
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
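A minimal sketch of one dependency-estimator / binarization pairing from the study (Pearson correlation plus Minimum Spanning Tree), with simulated signals standing in for multichannel EEG:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(6)
n_channels, n_samples = 16, 2000
common = rng.normal(size=n_samples)                 # shared component
signals = 0.5 * common + rng.normal(size=(n_channels, n_samples))

corr = np.abs(np.corrcoef(signals))                 # weighted connectivity matrix
np.fill_diagonal(corr, 0)

# MST of the *distance* graph (strong connections = short edges) keeps the
# n-1 strongest links that still connect every node: a density-independent
# binarization, unlike a uniform threshold.
dist = 1.0 - corr
mst = minimum_spanning_tree(dist).toarray()
adjacency = ((mst > 0) | (mst.T > 0)).astype(int)   # symmetrize binary network
print("edges kept:", adjacency.sum() // 2,
      "of", n_channels * (n_channels - 1) // 2)
```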
IRT-LR-DIF with Estimation of the Focal-Group Density as an Empirical Histogram
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Item response theory-likelihood ratio-differential item functioning (IRT-LR-DIF) is used to evaluate the degree to which items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Usually, the latent distribution is presumed normal for both…
Operations Research techniques in the management of large-scale reforestation programs
Joseph Buongiorno; D.E. Teeguarden
1978-01-01
A reforestation planning system for the Douglas-fir region of the Western United States is described. Part of the system is a simulation model to predict plantation growth and to determine economic thinning regimes and rotation ages as a function of site characteristics, initial density, reforestation costs, and management constraints. A second model estimates the...
Accounting for variation in root wood density and percent carbon in belowground carbon estimates
Brandon H. Namm; John-Pascal Berrill
2012-01-01
Little is known about belowground biomass and carbon in tanoak. Although tanoaks rarely provide merchantable wood, an assessment of belowground carbon loss due to tanoak removal and Sudden Oak Death will inform conservation and management decisions in redwood-tanoak ecosystems. The carbon content of woody biomass is a function of...
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
1981-07-01
Samejima (RR-79-1) suggests that it will be more fruitful to observe the square root of an information function, rather than the information... [figure residue removed; the plots showed estimated density functions, with density on the vertical axis] ...the estimated density functions, g*(r*), will affect the...
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio’s loss and economic capital, and neglecting the randomness of the recovery-rate distribution may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody’s. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody’s new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation, reaching the conclusion that the Gaussian kernel density estimate imitates the bimodal or multimodal distributions of corporate loan and bond recovery rates more faithfully. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the observed recovery rates of loans and bonds. Thus, using the kernel density estimate to delineate the bimodal recovery rates of bonds precisely is optimal in credit risk management. PMID:23874558
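A minimal sketch of the comparison described above, fitting a Beta distribution and a Gaussian KDE to simulated bimodal recovery rates (all parameter values made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Bimodal recoveries: a cluster of deep losses and a cluster of near-full recoveries.
recovery = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

a, b, _, _ = stats.beta.fit(recovery, floc=0, fscale=1)   # single-Beta MLE
kde = stats.gaussian_kde(recovery)

# A single Beta can only be unimodal or U-shaped at the endpoints; it cannot
# reproduce two interior modes, which shows up in the in-sample log-likelihood.
print("Beta log-likelihood:", stats.beta.logpdf(recovery, a, b).sum().round(1))
print("KDE  log-likelihood:", np.log(kde(recovery)).sum().round(1))
```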
NASA Astrophysics Data System (ADS)
Delage, Pierre; Karakostas, Foivos; Dhemaied, Amine; Belmokhtar, Malik; Lognonné, Philippe; Golombek, Matt; De Laure, Emmanuel; Hurst, Ken; Dupla, Jean-Claude; Kedar, Sharon; Cui, Yu Jun; Banerdt, Bruce
2017-10-01
In support of the InSight mission in which two instruments (the SEIS seismometer and the HP3 heat flow probe) will interact directly with the regolith on the surface of Mars, a series of mechanical tests were conducted on three different regolith simulants to better understand the observations of the physical and mechanical parameters that will be derived from InSight. The mechanical data obtained were also compared to data on terrestrial sands. The density of the regolith strongly influences its mechanical properties, as determined from the data on terrestrial sands. The elastoplastic compression volume changes were investigated through oedometer tests that also provided estimates of possible changes in density with depth. The results of direct shear tests provided values of friction angles that were compared with that of a terrestrial sand, and an extrapolation to lower density provided a friction angle compatible with that estimated from previous observations on the surface of Mars. The importance of the contracting/dilating shear volume changes of sands on the dynamic penetration of the mole was determined, with penetration facilitated by the ~1.3 Mg/m3 density estimated at the landing site. Seismic velocities, measured by means of piezoelectric bender elements in triaxial specimens submitted to various isotropic confining stresses, show the importance of the confining stress, with lesser influence of density changes under compression. A power law relation of velocity as a function of confining stress with an exponent of 0.3 was identified from the tests, allowing an estimate of the surface seismic velocity of 150 m/s. The effect on the seismic velocity of a 10% proportion of rock in the regolith was also studied. These data will be compared with in situ data measured by InSight after landing.
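Fitting the reported power law Vs = A * p^n is a one-line regression in log-log space; a minimal sketch with made-up bender-element data roughly consistent with the ~0.3 exponent quoted above:

```python
import numpy as np

p = np.array([5, 10, 25, 50, 100], float)        # confining stress, kPa
vs = np.array([160, 196, 258, 318, 390], float)  # shear-wave velocity, m/s

n, logA = np.polyfit(np.log(p), np.log(vs), 1)   # linear fit in log-log space
A = np.exp(logA)
print(f"Vs ~ {A:.1f} * p^{n:.2f}")               # exponent should come out near 0.3
```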
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. Compared with the MLE, they are robust to data contamination. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of three-component normal mixture models with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375
Improving chemical species tomography of turbulent flows using covariance estimation.
Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J
2017-05-01
Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, so the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of this additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
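A joint-normal prior enters a linear tomographic reconstruction through the standard Gaussian MAP formula. The sketch below is a generic Bayesian linear inversion under assumed toy dimensions and a squared-exponential prior covariance, not the authors' CST setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear tomography problem: y = A x + noise, fewer beams than pixels.
n_pix, n_beams = 100, 30
A = rng.random((n_beams, n_pix))
x_true = np.sin(np.linspace(0, 3 * np.pi, n_pix)) ** 2
y = A @ x_true + 0.05 * rng.standard_normal(n_beams)

# Joint-normal prior: mean mu and a squared-exponential spatial covariance,
# standing in for a flow covariance estimated from data.
i = np.arange(n_pix)
G = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 10.0 ** 2))
mu = np.full(n_pix, x_true.mean())
Ce = 0.05 ** 2 * np.eye(n_beams)  # measurement-noise covariance

# MAP estimate of the joint-normal posterior:
# x = mu + G A^T (A G A^T + Ce)^{-1} (y - A mu)
x_map = mu + G @ A.T @ np.linalg.solve(A @ G @ A.T + Ce, y - A @ mu)
print(f"relative error: {np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true):.2f}")
```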
Mapping Variation in Vegetation Functioning with Imaging Spectroscopy
NASA Astrophysics Data System (ADS)
Townsend, P. A.; Couture, J. J.; Kruger, E. L.; Serbin, S.; Singh, A.
2015-12-01
Imaging spectroscopy (otherwise known as hyperspectral remote sensing) offers the potential to characterize the spatial and temporal variation in biophysical and biochemical properties of vegetation that can be costly or logistically difficult to measure comprehensively using traditional methods. A number of recent studies have illustrated the capacity for imaging spectroscopy data, such as from NASA's AVIRIS sensor, to empirically estimate functional traits related to foliar chemistry and physiology (Singh et al. 2015, Serbin et al. 2015). Here, we present analyses that illustrate the implications of those studies to characterize within-field or -stand variability in ecosystem functioning. In agricultural ecosystems, within-field photosynthetic capacity can vary by 30-50%, likely due to within-field variations in water availability and soil fertility. In general, the variability of foliar traits is lower in forests than agriculture, but can still be significant. Finally, we demonstrate that functional trait variability at the stand scale is strongly related to vegetation diversity. These results have two significant implications: 1) reliance on a small number of field samples to broadly estimate functional traits likely underestimates variability in those traits, and 2) if trait estimations from imaging spectroscopy are reliable, such data offer the opportunity to greatly increase the density of measurements we can use to predict ecosystem function.
Robinson, Hugh S.; Ruth, Toni K.; Gude, Justin A.; Choate, David; DeSimone, Rich; Hebblewhite, Mark; Matchett, Marc R.; Mitchell, Michael S.; Murphy, Kerry; Williams, Jim
2015-01-01
To be most effective, the scale of wildlife management practices should match the range of a particular species' movements. For this reason, combined with our inability to rigorously or regularly census mountain lion populations, several authors have suggested that mountain lions be managed in a source-sink or metapopulation framework. We used a combination of resource selection functions, mortality estimation, and dispersal modeling to estimate cougar population levels in Montana statewide and the potential population-level effects of planned harvest levels. Between 1980 and 2012, 236 independent mountain lions were collared and monitored for research in Montana. From these data we used 18,695 GPS locations collected during winter from 85 animals to develop a resource selection function (RSF), and 11,726 VHF and GPS locations from 142 animals, along with the locations of 6343 mountain lions harvested from 1988-2011, to validate the RSF model. Our RSF model validated well in all portions of the state, although it appeared to perform better in Montana Fish, Wildlife and Parks (MFWP) Regions 1, 2, 4 and 6 than in Regions 3, 5, and 7. Our mean RSF-based population estimate for the total population (kittens, juveniles, and adults) of mountain lions in Montana in 2005 was 3926, with almost 25% of the entire population in MFWP Region 1. Estimates based on high and low reference populations produce a possible range of 2784 to 5156 mountain lions statewide. Based on a range of possible survival rates, we estimated the mountain lion population in Montana to be stable to slightly increasing between 2005 and 2010, with lambda ranging from 0.999 (SD = 0.05) to 1.02 (SD = 0.03). We believe these population growth rates to be conservative estimates of true population growth. Our model suggests that proposed changes to female harvest quotas for 2013-2015 will result in an annual statewide population decline of 3% and shows that, due to reduced dispersal, changes to harvest in one management unit may affect population growth in neighboring units where smaller or even no changes were made. Uncertainty regarding dispersal levels and initial population density may have a significant effect on predictions at a management-unit scale (i.e., 2000 km2), while at a regional scale (i.e., 50,000 km2) large differences in initial population density result in relatively small changes in population growth rate, and uncertainty about dispersal may not be as influential. Doubling the presumed initial density from a low estimate of 2.19 total animals per 100 km2 to a high of 4.04 changed the annual statewide population growth rate by only 2.6% (low initial population estimate λ = 0.99; high initial population estimate λ = 1.03). We suggest modeling tools such as this may be useful in harvest planning at regional and statewide levels.
Enhancement of the Triple Alpha Rate in a Hot Dense Medium
NASA Astrophysics Data System (ADS)
Beard, Mary; Austin, Sam M.; Cyburt, Richard
2017-09-01
In a sufficiently hot and dense astrophysical environment the rate of the triple-alpha (3α) reaction can increase greatly over the value appropriate for helium-burning stars, owing to hadronically induced deexcitation of the Hoyle state. In this Letter we use a statistical model to evaluate the enhancement as a function of temperature and density. For a density of 10^6 g cm^-3 enhancements can exceed a factor of 100. In high-temperature or high-density situations, the enhanced 3α rate is a better estimate of this rate and should be used in these circumstances. We then examine the effect of these enhancements on the production of ^12C in the neutrino wind following a supernova explosion and in an X-ray burster.
Density-dependent recruitment of the bloater (Coregonus hoyi) in Lake Michigan
Brown, Edward H.; Eck, Gary W.
1992-01-01
Density-dependent recruitment of the bloater (Coregonus hoyi) in Lake Michigan during and after recovery of the population in about 1977-1983 was best reflected in the fit of the Beverton-Holt recruitment function to age-1 and age-2 recruits and the estimated eggs of parents surveyed with trawls. A lower growth rate and lower lipid content of bloaters at higher population densities, and no evidence of cannibalism, supported the conclusion that recruitment is resource limited when alewife (Alosa pseudoharengus) abundance is low. Predation on larvae by alewives was indicated in earlier studies as the probable cause of depressed recruitment of bloaters before their recovery, which coincided with declining alewife abundance. This negative interaction masked any bloater stock-recruitment relation in the earlier period.
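Fitting a Beverton-Holt recruitment function to egg and recruit indices amounts to a short nonlinear curve fit. The sketch below uses invented index values, not the Lake Michigan survey data:

```python
import numpy as np
from scipy.optimize import curve_fit

def beverton_holt(E, a, b):
    """Beverton-Holt recruitment: R = a E / (1 + b E)."""
    return a * E / (1.0 + b * E)

# Illustrative egg-index vs recruit-index pairs (placeholder values).
eggs = np.array([5., 10., 20., 40., 80., 160.])
recruits = np.array([4.2, 7.5, 11.8, 15.3, 17.6, 18.9])

(a, b), _ = curve_fit(beverton_holt, eggs, recruits, p0=[1.0, 0.01])
print(f"a = {a:.2f}, b = {b:.4f}, asymptotic recruitment a/b = {a / b:.1f}")
```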
Cool Core Bias in Sunyaev-Zel’dovich Galaxy Cluster Surveys
Lin, Henry W.; McDonald, Michael; Benson, Bradford; ...
2015-03-18
Sunyaev-Zeldovich (SZ) surveys find massive clusters of galaxies by measuring the inverse Compton scattering of cosmic microwave background off of intra-cluster gas. The cluster selection function from such surveys is expected to be nearly independent of redshift and cluster astrophysics. In this work, we estimate the effect on the observed SZ signal of centrally-peaked gas density profiles (cool cores) and radio emission from the brightest cluster galaxy (BCG) by creating mock observations of a sample of clusters that span the observed range of classical cooling rates and radio luminosities. For each cluster, we make simulated SZ observations by the South Pole Telescope and characterize the cluster selection function, but note that our results are broadly applicable to other SZ surveys. We find that the inclusion of a cool core can cause a change in the measured SPT significance of a cluster between 0.01%–10% at z > 0.3, increasing with cuspiness of the cool core and angular size on the sky of the cluster (i.e., decreasing redshift, increasing mass). We provide quantitative estimates of the bias in the SZ signal as a function of a gas density cuspiness parameter, redshift, mass, and the 1.4 GHz radio luminosity of the central AGN. Based on this work, we estimate that, for the Phoenix cluster (one of the strongest cool cores known), the presence of a cool core is biasing the SZ significance high by ~6%. The ubiquity of radio galaxies at the centers of cool core clusters will offset the cool core bias to varying degrees.
Evaluation of trapping-web designs
Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.
2005-01-01
The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study is performed to explore the performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
As metro network expansion provides more alternative routes, it is attractive to integrate into route choice modeling both the impacts of the route set and the interdependency among alternative routes on route choice probability. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the model involves multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo (MCMC) approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for the calculation of route choice probabilities and a good application performance for transfer flow volume prediction. PMID:28591188
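The estimation machinery rests on Metropolis-Hastings sampling within an MCMC loop. A minimal random-walk M-H sketch on a stand-in one-parameter log-posterior (the actual CMNP posterior is multivariate with a structured covariance) looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta):
    # Stand-in log-posterior for a utility-function parameter; the paper's
    # CMNP posterior is far richer -- this only shows the M-H mechanics.
    return -0.5 * ((theta - 1.5) / 0.4) ** 2

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()      # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept
    chain.append(theta)

burned = np.array(chain[5000:])                      # discard burn-in
print(f"posterior mean ~ {burned.mean():.2f}, sd ~ {burned.std():.2f}")
```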
Flux density calibration in diffuse optical tomographic systems.
Biswas, Samir Kumar; Rajan, Kanhirodan; Vasu, Ram M
2013-02-01
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from U_measured^cal. In the first approach, the measurement data from a homogeneous phantom (U_measured^homo) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the necessity of a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing it to the corresponding detector positions. The experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach.
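Read schematically, both calibration schemes reduce to estimating a per-detector response and dividing it out. The sketch below uses made-up detector vectors and a plain average as the redistribution step, so it is an interpretation of the two schemes rather than the paper's algorithm:

```python
import numpy as np

# Hypothetical per-detector readings; shapes: (n_detectors,)
U_homo = np.array([2.0, 1.8, 2.2, 1.9])          # measured, homogeneous phantom
phi_homo_model = np.array([1.0, 1.0, 1.1, 0.9])  # forward-model flux, same phantom
U_hetero = np.array([1.6, 1.5, 2.0, 1.4])        # measured, unknown object

# Scheme 1: per-detector system response R estimated from the homogeneous
# phantom, then divided out of the heterogeneous measurement.
R = U_homo / phi_homo_model
phi_est = U_hetero / R
print(np.round(phi_est, 3))

# Scheme 2 (no homogeneous phantom): approximate the homogeneous reading by
# statistically averaging the heterogeneous data and redistributing the mean
# to the detector positions -- here a plain average stands in.
U_homo_proxy = np.full_like(U_hetero, U_hetero.mean())
phi_est2 = U_hetero / (U_homo_proxy / phi_homo_model)
print(np.round(phi_est2, 3))
```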
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed, we applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups; 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as benchmarks against which future development of 'breakthrough' treatments should be measured.
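A weighted kernel density estimate of treatment effects can be built directly in SciPy (>= 1.2). The sketch uses a fixed bandwidth rather than the adaptive variant the authors applied, and the effect sizes, weights, and "large effect" threshold are simulated assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical observed treatment effects (e.g., log hazard ratios) and
# inverse-variance weights from trials of very different sizes.
effects = rng.normal(loc=-0.05, scale=0.15, size=200)
weights = rng.uniform(0.2, 1.0, size=200)

# Weighted (fixed-bandwidth) KDE; the paper's adaptive step is omitted.
pdf = gaussian_kde(effects, weights=weights)

# Probability of a "large" beneficial effect, e.g. effect < -0.2 (assumed).
p_large = pdf.integrate_box_1d(-np.inf, -0.2)
print(f"P(large effect) ~ {p_large:.2f}")
```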
Second feature of the matter two-point function
NASA Astrophysics Data System (ADS)
Tansella, Vittorio
2018-05-01
We point out the existence of a second feature in the matter two-point function, besides the acoustic peak, due to the baryon-baryon correlation in the early Universe and positioned at twice the distance of the peak. We discuss how the existence of this feature is implied by the well-known heuristic argument that explains the baryon bump in the correlation function. We mimic a standard χ² analysis to estimate the detection significance of the second feature and conclude that, for realistic values of the baryon density, an SKA-like galaxy survey will not be able to detect this feature with standard correlation-function analysis.
Structural and electronic properties of GaAs and GaP semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Anita; Kumar, Ranjan
2015-05-15
The structural and electronic properties of the zinc-blende phase of the GaAs and GaP compounds are studied using the self-consistent SIESTA code, pseudopotentials, and density functional theory (DFT) in the local density approximation (LDA). The lattice constant, equilibrium volume, cohesive energy per pair, compressibility, and band gap are calculated. The band gaps calculated with DFT using the LDA are smaller than the experimental values. The P-V data fitted to the third-order Birch-Murnaghan equation of state provide the bulk modulus and its pressure derivatives. Our estimates of the structural and electronic properties are in agreement with available experimental and theoretical data.
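The bulk-modulus extraction mentioned above amounts to a nonlinear fit of the third-order Birch-Murnaghan P(V) form. The sketch below fits synthetic P-V points; the parameter values are placeholders, not the paper's results:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_3(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan P(V) equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * B0 * (eta**3.5 - eta**2.5) * (1.0 + 0.75 * (B0p - 4.0) * (eta - 1.0))

# Synthetic P-V points (GPa, arbitrary volume units) generated from
# V0 = 45, B0 = 75 GPa, B0' = 4.5 plus noise -- placeholders only.
V = np.linspace(36.0, 45.0, 10)
P = birch_murnaghan_3(V, 45.0, 75.0, 4.5) + 0.05 * np.random.default_rng(4).standard_normal(10)

(V0, B0, B0p), _ = curve_fit(birch_murnaghan_3, V, P, p0=[44.0, 70.0, 4.0])
print(f"V0 = {V0:.1f}, bulk modulus B0 = {B0:.1f} GPa, B0' = {B0p:.2f}")
```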
Enceladus Plume Density Modeling and Reconstruction for Cassini Attitude Control System
NASA Technical Reports Server (NTRS)
Sarani, Siamak
2010-01-01
In 2005, Cassini detected jets composed mostly of water spouting from a set of nearly parallel rifts in the crust of Enceladus, an icy moon of Saturn. During an Enceladus flyby, either reaction wheels or attitude control thrusters on the Cassini spacecraft are used to overcome the external torque imparted on Cassini by the Enceladus plume or jets, as well as to slew the spacecraft in order to meet the pointing needs of the on-board science instruments. If the estimated imparted torque is larger than the reaction wheel control system can counteract, thrusters are used to control the spacecraft. Having an engineering model that can predict and simulate the external torque imparted on the Cassini spacecraft by the plume during all projected low-altitude Enceladus flybys is therefore important; equally important is being able to reconstruct the plume density after each flyby in order to calibrate the model. This paper describes an engineering model of the Enceladus plume density, as a function of flyby altitude, developed for the Cassini Attitude and Articulation Control Subsystem, and novel methodologies that use guidance, navigation, and control data to estimate the external torque imparted on the spacecraft by the Enceladus plume and jets. The plume density is determined accordingly. The methodologies described have already been used to reconstruct the plume density for three low-altitude Enceladus flybys of Cassini in 2008 and will continue to be used on all remaining low-altitude Enceladus flybys in Cassini's extended missions.
Li, Peifang; Mei, Tingting; Lv, Linxia; Lu, Cheng; Wang, Weihua; Bao, Gang; Gutsev, Gennady L
2017-08-31
The geometrical structures and electronic properties of the neutral RhB_n and singly negatively charged RhB_n^- clusters are obtained in the range 3 ≤ n ≤ 10 using the unbiased CALYPSO structure search method and density functional theory (DFT). A combination of the PBE0 functional and the def2-TZVP basis set is used for determining global minima on the potential energy surfaces of the Rh-doped B_n clusters. The photoelectron spectra of the anions are simulated using the time-dependent density functional theory (TD-DFT) method. Good agreement between our simulated and experimentally obtained photoelectron spectra for RhB_9^- supports the validity of our theoretical method. The relative stabilities of the ground-state RhB_n and RhB_n^- clusters are estimated using the calculated binding energies, second-order total energy differences, and HOMO-LUMO gaps. It is found that RhB_7 and RhB_8^- are the most stable species in the neutral and anionic series, respectively. The chemical bonding analysis reveals that the RhB_8^- cluster possesses two sets of delocalized σ and π bonds. In both cases the Hückel 4N + 2 rule is fulfilled, so this cluster possesses both σ and π aromaticity.
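Second-order total energy differences, one of the stability measures used above, are a simple stencil over cluster energies, D2(n) = E(n-1) + E(n+1) - 2E(n). The energies in this sketch are placeholders, not the computed DFT values:

```python
import numpy as np

# Hypothetical total energies E(n) (eV) of RhB_n clusters, n = 3..10.
n = np.arange(3, 11)
E = np.array([-12.1, -17.0, -21.6, -26.5, -31.9, -36.2, -40.9, -45.3])

# Second-order total-energy difference: D2(n) = E(n-1) + E(n+1) - 2 E(n).
# A positive peak marks a cluster more stable than its neighbours.
d2 = E[:-2] + E[2:] - 2 * E[1:-1]
for size, val in zip(n[1:-1], d2):
    print(f"n = {size}: Delta2 E = {val:+.2f} eV")
```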
Sato, Tatsuhiko; Furusawa, Yoshiya
2012-10-01
Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
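For orientation, the non-stochastic microdosimetric kinetic relation underlying both models can be written as -ln S = (α0 + β·z1D)·D + β·D². The parameter values in this sketch are assumed, and the stochastic extensions the paper develops are omitted:

```python
import numpy as np

# Basic (non-stochastic) microdosimetric-kinetic survival relation:
#   -ln S = (alpha0 + beta * z1D) * D + beta * D**2,
# where z1D is the dose-mean specific energy per event in a domain.
alpha0, beta = 0.13, 0.05  # Gy^-1, Gy^-2 (assumed placeholder values)
z1D = 1.5                  # Gy, from a microdosimetric spectrum (assumed)

D = np.linspace(0.0, 8.0, 5)  # absorbed dose in Gy
S = np.exp(-(alpha0 + beta * z1D) * D - beta * D ** 2)
for d, s in zip(D, S):
    print(f"D = {d:.0f} Gy: S = {s:.3f}")
```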
Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente
2017-04-29
Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles incorporate roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. In addition, knowledge of the values of the vehicle's parameters is essential to obtain an accurate vehicle response, yet some vehicle parameters cannot be easily obtained and can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252
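The PDF-truncation step combined with a Kalman update can be illustrated in one dimension: update the Gaussian state estimate, then replace its mean and variance with those of the same Gaussian truncated to physical bounds. The bounds, noise values, and scalar model here are assumptions; the paper's dual filter also estimates parameters:

```python
import numpy as np
from scipy.stats import truncnorm

# One scalar Kalman-filter update for the roll angle, followed by the
# PDF-truncation step that keeps the estimate inside physical bounds.
x, P = 0.0, 0.5        # prior mean (rad) and variance
z, Rn = 0.9, 0.1       # measurement and its noise variance (H = 1)

K = P / (P + Rn)       # Kalman gain
x = x + K * (z - x)    # posterior mean
P = (1 - K) * P        # posterior variance

# Physical bounds on the roll angle, e.g. +/- 0.6 rad (assumed).
lo, hi = -0.6, 0.6
a, b = (lo - x) / np.sqrt(P), (hi - x) / np.sqrt(P)
x_t = truncnorm.mean(a, b, loc=x, scale=np.sqrt(P))
P_t = truncnorm.var(a, b, loc=x, scale=np.sqrt(P))
print(f"untruncated: {x:.3f} +/- {np.sqrt(P):.3f}; "
      f"truncated: {x_t:.3f} +/- {np.sqrt(P_t):.3f}")
```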
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and to identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower-dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to the available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by the PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which the superiority of the iterative method over straight PCA is demonstrated.
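The scoring of a test sample against PCA-modeled marginals can be sketched as a variance-normalized distance in the retained subspace. The feature vectors and the 95%-variance cutoff below are illustrative stand-ins for the paper's subspace sampling and estimability criterion:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Normative set: feature vectors from healthy images (one toy subspace).
normal = rng.multivariate_normal([0, 0, 0], np.diag([4.0, 1.0, 0.25]), size=300)

# Limit dimensionality -- here simply the components explaining 95% of
# variance, standing in for the paper's estimability criterion.
pca = PCA(n_components=0.95).fit(normal)
z = pca.transform(normal)
var = z.var(axis=0)

def abnormality(x):
    """Variance-normalized (Mahalanobis-like) score in the PCA subspace."""
    proj = pca.transform(x.reshape(1, -1)).ravel()
    return float(np.sum(proj ** 2 / var))

print(f"healthy-like score: {abnormality(np.array([0.5, 0.2, 0.1])):.1f}")
print(f"deviant score:      {abnormality(np.array([6.0, 3.0, 1.5])):.1f}")
```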
NASA Astrophysics Data System (ADS)
Lake, Sean Earl
2017-05-01
The measurement of the Extragalactic Background Light (EBL) has seen some controversy in recent works, with direct and indirect measures conflicting. Specifically, upper limits based on analyzing the plausible opacity obscuring TeV spectra of blazars suggest that the density of radiation with wavelengths near 3.4 μm is one third to one half as intense as direct measures of the same (for example: Aharonian et al., 2006; Levenson et al., 2007; Matsumoto et al., 2005). The dominant contributor to the EBL at 3.4 μm is expected to be ordinary starlight from relatively local, z < 1, galaxies, so an estimate of the amount of light emitted by galaxies based on the galaxy Luminosity Function (LF) should provide a useful lower limit to the EBL. While analyses of this sort have been done by others (Dominguez et al., 2011; Helgason et al., 2012), the full-sky coverage of the AllWISE database has made it possible for us to improve the measurement of both the LF at 2.4 μm and the EBL using the large public spectroscopic redshift surveys. In order to do so, we had to develop a mathematical model for the measurement of a generalization of the LF, which is the density of galaxies per unit comoving volume per unit luminosity, to the Spectro-Luminosity Functional (SLF), which replaces the density per unit single luminosity, dL, with the density per luminosities at all frequencies, DL_ν. Our best combined analysis of the data yields present-day Schechter function LF parameters of: L⋆ = 6.4 ± [0.1 stat, 0.3 sys] × 10^10 L_{2.4μm,⊙} (M⋆ = -21.67 ± [0.02 stat, 0.05 sys] AB mag), φ⋆ = 5.8 ± [0.3 stat, 0.3 sys] × 10^-3 Mpc^-3, and α = -1.050 ± [0.004 stat, 0.03 sys]; this implies a present-day density of galaxies of 0.08 Mpc^-3 brighter than 10^6 L_{2.4μm,⊙} (10^-3 Mpc^-3 brighter than L⋆) and a luminosity density equivalent to 3.8 × 10^8 L_{2.4μm,⊙} Mpc^-3. The net EBL at 3.4 μm that our synthesis model produces from galaxies closer than z = 5 is I_ν = 9.0 ± 0.5 kJy sr^-1 (νI_ν = 8.0 ± 0.4 nW m^-2 sr^-1), largely in agreement with similar LF-based estimates of the EBL.
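The quoted number and luminosity densities follow from integrating the Schechter form with the fitted parameters; the short sketch below reproduces that arithmetic (the integration limits are assumptions):

```python
import numpy as np
from scipy.integrate import quad

# Schechter luminosity function with the fitted 2.4-micron parameters:
# phi(L) dL = (phi_star / L_star) * (L / L_star)^alpha * exp(-L / L_star) dL
L_star, phi_star, alpha = 6.4e10, 5.8e-3, -1.050  # L_sun, Mpc^-3, --

def phi(L):
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

# Integrate in log-luminosity for numerical stability (dL = L d(ln L)).
lo, hi = np.log(1e6), np.log(1e13)  # limits in L_sun, assumed
n_gal = quad(lambda u: np.exp(u) * phi(np.exp(u)), lo, hi, limit=200)[0]
rho_L = quad(lambda u: np.exp(2 * u) * phi(np.exp(u)), lo, hi, limit=200)[0]
print(f"n(>1e6 L_sun) ~ {n_gal:.2f} Mpc^-3")              # ~0.08, as quoted
print(f"luminosity density ~ {rho_L:.2e} L_sun Mpc^-3")   # ~3.8e8, as quoted
```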
Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing
2012-01-01
Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method requires prior estimates of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial to registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and to estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies with spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. The PET images of a rat brain are used to compare the segmented shape and area of the cerebral cortex obtained by the K-Means method and the proposed method by volume rendering. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method.
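The hybrid idea, using a kernel density estimate of the intensity histogram to choose the number of Gaussian mixture components rather than fixing it a priori, can be sketched on synthetic intensities; the mode-counting rule below is a simple stand-in for the paper's procedure:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic "microPET" intensities: background, normal uptake, hot spot.
img = np.concatenate([rng.normal(0.2, 0.05, 6000),
                      rng.normal(1.0, 0.15, 3000),
                      rng.normal(2.5, 0.20, 1000)])

# KDE pass: count modes of the smoothed intensity histogram to choose the
# number of GMM components (a crude stand-in for the hybrid step).
grid = np.linspace(img.min(), img.max(), 512)
dens = gaussian_kde(img)(grid)
is_peak = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
k = int(is_peak.sum())

gmm = GaussianMixture(n_components=k, random_state=0).fit(img.reshape(-1, 1))
print(f"components chosen from KDE modes: {k}; cluster means: "
      f"{np.sort(gmm.means_.ravel()).round(2)}")
```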
EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai
To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signals, i.e., Gaussian or Laplacian. The PDF of upper-limb motions associated with EMG signals is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
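RMS, MAV, and the SNR definition used above (mean feature divided by its fluctuation) take only a few lines of NumPy. The simulated Laplacian windows below illustrate the reported ordering (MAV SNR above RMS SNR); the window size and sample count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mav(x):
    return np.mean(np.abs(x))

# Simulated EMG: Laplacian-distributed samples, as the paper finds for
# dynamic contractions; 500 windows of 200 samples each (assumed).
windows = rng.laplace(0.0, 1.0, size=(500, 200))

for name, est in (("RMS", rms), ("MAV", mav)):
    vals = np.array([est(w) for w in windows])
    print(f"{name}: SNR = mean/std = {vals.mean() / vals.std():.1f}")
```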