Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation, and examines the difference in results when an estimation scheme is initialized via a traditional transformation versus the alternative approach.
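The multivariate transformation theorem invoked here is the standard change-of-variables rule; stated compactly (a textbook formula, not a result specific to this paper):

```latex
% For an invertible, differentiable reparameterization y = g(x),
% probability mass is conserved, so the density acquires a Jacobian factor:
\[
  p_Y(\mathbf{y}) \;=\; p_X\!\left(g^{-1}(\mathbf{y})\right)
  \left|\det \frac{\partial g^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right| .
\]
```

This Jacobian factor is why a uniform admissible region generally becomes non-uniform under a naive change of state space, and why an invariance argument such as the Principle of Transformation Groups is needed to construct a prior whose density is preserved under reparameterization.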
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
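The positivity failure is easy to exhibit numerically (a minimal sketch of the standard fourth-order Gram-Charlier A truncation; the cited papers' own expansions may differ):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gram_charlier_pdf(x, skew=0.0, ex_kurt=0.0):
    """Fourth-order truncated Gram-Charlier A series around a standard normal."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    he3 = hermeval(x, [0, 0, 0, 1])        # probabilists' Hermite He_3
    he4 = hermeval(x, [0, 0, 0, 0, 1])     # probabilists' Hermite He_4
    return phi * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

x = np.linspace(-6, 6, 2001)
f = gram_charlier_pdf(x, ex_kurt=5.0)      # excess kurtosis beyond what the
print(f.min())                             # truncation can absorb: f dips below
                                           # zero, so it is not a valid density
```

For excess kurtosis above 4 the bracketed correction turns negative near x = ±√3, which is exactly the "not manifestly positive" defect the paper aims to remove.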
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
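A toy version of the idea (my sketch, not the author's code): posit a parametric, nonstationary density, here a Gaussian whose mean drifts linearly in time, fit it to a scalar series by maximum likelihood, and extrapolate the fitted parameters forward.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
x = 2.0 - 1.5 * t + 0.3 * rng.standard_normal(t.size)   # synthetic drifting series

def neg_log_lik(theta):
    a, b, log_s = theta                       # mean = a + b*t, std = exp(log_s)
    mu, s = a + b * t, np.exp(log_s)
    return 0.5 * np.sum(((x - mu) / s) ** 2) + x.size * log_s

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0]))
a, b, log_s = fit.x
t_future = 1.5                                # extrapolate beyond the data window
print("forecast mean at t=1.5:", a + b * t_future, "std:", np.exp(log_s))
```

The paper's method additionally propagates parameter uncertainty into the forecast density, which this sketch omits.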
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words: granular physics, probability density functions, Fourier transforms.
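For an isotropic 2-D force distribution, the marginal density of one Cartesian component follows from the magnitude density by an Abel-type transform; with an exponential magnitude distribution this yields the modified Bessel function of the second kind, consistent with the abstract (my reconstruction of the standard result; the paper's normalization may differ):

```latex
% Marginal of one Cartesian component for an isotropic 2-D force vector,
% and its closed form for an exponential magnitude distribution:
\[
  P_x(f_x) = \frac{1}{\pi}\int_{|f_x|}^{\infty}
             \frac{P_F(F)}{\sqrt{F^2 - f_x^2}}\, dF,
  \qquad
  P_F(F) = \frac{1}{F_0}\,e^{-F/F_0}
  \;\Longrightarrow\;
  P_x(f_x) = \frac{1}{\pi F_0}\,K_0\!\left(\frac{|f_x|}{F_0}\right).
\]
```

The K_0 form follows from the integral representation K_0(z) = ∫_1^∞ e^{-zt}/√(t²-1) dt after substituting F = |f_x| t.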
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density that exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
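The practical consequence is easy to quantify: with per-survey detection probability p, the chance of at least one detection in k independent surveys is 1 - (1 - p)^k. A quick check with the reported estimates (my arithmetic, not from the paper):

```python
p_high, p_low = 0.952, 0.333   # estimated per-survey detection probabilities

def p_detect(p, k):
    """Probability of at least one detection in k independent surveys."""
    return 1 - (1 - p) ** k

for k in (1, 2, 3):
    print(k, round(p_detect(p_high, k), 3), round(p_detect(p_low, k), 3))
# A single survey misses ~67% of occupied low-density quadrats;
# even three surveys still miss ~30%.
```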
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
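The core scoring idea can be sketched as follows (a minimal sketch of the order-statistics criterion; the scoring function and trial CDF here are my stand-ins, not the authors' implementation): under a correct CDF F, the transformed sample u_i = F(x_i) should behave like uniform order statistics with known means i/(n+1), so atypical deviations flag a poor trial density.

```python
import numpy as np
from scipy import stats

def uniformity_score(x, trial_cdf):
    """Score a trial CDF: standardized deviation of sorted F(x) from the
    uniform order-statistic expectations i/(n+1)."""
    u = np.sort(trial_cdf(x))
    n = u.size
    i = np.arange(1, n + 1)
    mean = i / (n + 1.0)
    var = i * (n + 1 - i) / ((n + 1.0) ** 2 * (n + 2.0))  # Beta(i, n+1-i) variance
    z = (u - mean) / np.sqrt(var)                          # scaled quantile residuals
    return np.max(np.abs(z))                               # flag atypical fluctuations

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
print(uniformity_score(x, stats.norm(0, 1).cdf))    # good trial CDF: small score
print(uniformity_score(x, stats.norm(0.5, 1).cdf))  # misfit CDF: larger score
```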
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict in advance; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
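A rough flavor of the approach (my sketch under stand-in assumptions: k-means prototypes replace the SOINN network, and each prototype carries a local Gaussian kernel sized by its cluster's spread; this is not the KDESOINN algorithm itself):

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(4, 0.5, (500, 2))])

# Stand-in for the SOINN prototype network: k-means centroids + memberships.
centers, labels = kmeans2(data, k=20, seed=2, minit="points")

def density(x):
    """Mixture of per-prototype Gaussians, weighted by cluster occupancy."""
    p = 0.0
    for j, c in enumerate(centers):
        members = data[labels == j]
        if len(members) < 3:
            continue
        cov = np.cov(members.T) + 1e-3 * np.eye(2)   # local bandwidth from spread
        p += len(members) / len(data) * multivariate_normal(c, cov).pdf(x)
    return p

print(density(np.array([0.0, 0.0])), density(np.array([10.0, 10.0])))
```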
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions
2009-03-01
AFIT thesis AFIT/GE/ENG/09-23; only fragments of the front matter and symbol list survive extraction: D is the random variable governing the distribution of dither values, p_D(t) is its probability density function, and the thesis examines the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.
NASA Astrophysics Data System (ADS)
Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko
2018-01-01
Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.
NASA Astrophysics Data System (ADS)
Vico, Giulia; Porporato, Amilcare
2013-04-01
Supplemental irrigation represents one of the main strategies to mitigate the effects of climate variability and stabilize yields. Irrigated agriculture currently provides 40% of food production and its relevance is expected to further increase in the near future, in the face of the projected alterations of rainfall patterns and increase in food, fiber, and biofuel demand. Because of the significant investments and water requirements involved in irrigation, strategic choices are needed to preserve productivity and profitability while maintaining sustainable water management - a nontrivial task given the unpredictability of the rainfall forcing. To facilitate decision making under uncertainty, a widely applicable probabilistic framework is proposed. The occurrence of rainfall events and irrigation applications are linked probabilistically to crop development during the growing season and yields at harvest. Based on these linkages, the probability density function of yields and the corresponding probability density function of required irrigation volumes, as well as the probability density function of yields under the most common case of limited water availability, are obtained analytically as functions of irrigation strategy, climate, soil, and crop parameters. The full probabilistic description of the frequency of occurrence of yields and water requirements is a crucial tool for decision making under uncertainty, e.g., via expected utility analysis. Furthermore, knowledge of the probability density function of yield allows us to quantify the hydrologic risk of yield reduction. Two risk indices are defined and quantified: the long-term risk index, suitable for long-term irrigation strategy assessment and investment planning, and the real-time risk index, providing a rigorous probabilistic quantification of the emergence of drought conditions during a single growing season in an agricultural setting. Our approach employs relatively few parameters and is thus easily and broadly applicable to different crops and sites, under current and future climate scenarios. Hence, the proposed probabilistic framework provides a quantitative tool to assess the impact of irrigation strategy and water allocation on the risk of not meeting a certain target yield, thus guiding the optimal allocation of water resources for human and environmental needs.
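Although the paper's results are analytical, the linkage can be illustrated numerically (a deliberately simplified Monte Carlo sketch; the marked-Poisson rainfall model, parameter values, and toy yield response are my assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_seasons, season_len = 10_000, 120          # growing-season length in days
storm_rate, mean_depth = 0.3, 15.0           # storms/day; mean storm depth (mm)
irrigation_cap = 150.0                       # seasonal irrigation water available (mm)
demand, target_yield = 900.0, 0.65           # crop water demand (mm); relative target

yields = np.empty(n_seasons)
for s in range(n_seasons):
    n_storms = rng.poisson(storm_rate * season_len)
    rain = rng.exponential(mean_depth, n_storms).sum()
    irrigation = min(irrigation_cap, max(0.0, demand - rain))   # deficit top-up
    yields[s] = min(1.0, (rain + irrigation) / 1000.0)          # toy yield response

# Long-term risk index: probability of not meeting the target yield.
print("P(yield < target) =", (yields < target_yield).mean())
```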
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
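The key ingredient, "ecological distance," can be sketched with standard tools (an illustrative sketch: a hypothetical resistance grid, least-cost distances via Dijkstra on the grid graph, and a half-normal encounter model; not the authors' code):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Resistance surface on a small grid (higher = harder to cross).
n = 20
resistance = np.ones((n, n)); resistance[:, 10] = 25.0   # a "river" column

# Build the 4-neighbor grid graph; edge cost = mean resistance of endpoints.
idx = lambda r, c: r * n + c
g = lil_matrix((n * n, n * n))
for r in range(n):
    for c in range(n):
        for dr, dc in ((0, 1), (1, 0)):
            if r + dr < n and c + dc < n:
                w = 0.5 * (resistance[r, c] + resistance[r + dr, c + dc])
                g[idx(r, c), idx(r + dr, c + dc)] = w
                g[idx(r + dr, c + dc), idx(r, c)] = w

trap = idx(10, 5)
ecodist = dijkstra(g.tocsr(), directed=False, indices=trap)  # least-cost distances

# Half-normal encounter probability in ecological rather than Euclidean distance.
p0, sigma = 0.8, 6.0
p_encounter = p0 * np.exp(-ecodist**2 / (2 * sigma**2))
print(p_encounter[idx(10, 4)], p_encounter[idx(10, 15)])     # near side vs. across
```

In the paper, the resistance parameters are estimated jointly with density inside the SCR likelihood rather than fixed in advance as here.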
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
Our objective was to understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
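The estimation step can be sketched with standard tools (a sketch assuming hypothetical effect sizes and weights; SciPy's weighted Gaussian KDE stands in for the paper's weighted adaptive KDE):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Hypothetical observed treatment effects (log hazard ratios) and trial weights.
log_hr = rng.normal(-0.05, 0.25, 300)
weights = rng.integers(50, 2000, 300).astype(float)   # e.g., trial sample sizes
weights /= weights.sum()

kde = gaussian_kde(log_hr, weights=weights)

# Probability that a trial shows a "large" effect, e.g., hazard ratio below 0.7.
p_large = kde.integrate_box_1d(-np.inf, np.log(0.7))
print(f"P(HR < 0.7) = {p_large:.3f}")
```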
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.
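Under the normal-mixture assumption, the combined learning can be sketched as an EM procedure in which labeled samples have fixed responsibilities and unlabeled samples have soft ones (my minimal 1-D sketch, not the paper's estimator):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
# Two 1-D Gaussian classes: a few labeled samples, many unlabeled ones.
xl = np.concatenate([rng.normal(0, 1, 10), rng.normal(3, 1, 10)])
yl = np.repeat([0, 1], 10)
xu = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])

mu, sd, pi = np.array([-1.0, 4.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(50):                                    # EM iterations
    # E-step: soft responsibilities for unlabeled data only.
    like = pi * norm.pdf(xu[:, None], mu, sd)
    r_u = like / like.sum(axis=1, keepdims=True)
    r_l = np.eye(2)[yl]                                # labels fix responsibilities
    r = np.vstack([r_l, r_u]); x = np.concatenate([xl, xu])
    # M-step: weighted updates pool labeled and unlabeled information.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / nk.sum()

print(mu, sd, pi)   # estimates sharpened by the unlabeled pool
```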
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S.; Viebahn, J.P.; Mulder, T.E.
Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
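Near a stable fixed point and under the small-noise approximation, the stationary density is Gaussian with covariance given by a Lyapunov equation; for linearized dynamics dx = Ax dt + B dW (a compressed statement of the standard result the continuation method exploits):

```latex
% Stationary covariance C of the linearized stochastic system:
\[
  A\,C + C\,A^{\mathsf{T}} + B\,B^{\mathsf{T}} = 0,
  \qquad
  p(\mathbf{x}) \;\propto\;
  \exp\!\left(-\tfrac{1}{2}\,\mathbf{x}^{\mathsf{T}} C^{-1}\mathbf{x}\right).
\]
```

Continuing the fixed point in a parameter, and solving the large generalized Lyapunov equation with low-rank iterative methods, then traces how this PDF deforms along the solution branch.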
Extended Importance Sampling for Reliability Analysis under Evidence Theory
NASA Astrophysics Data System (ADS)
Yuan, X. K.; Chen, B.; Zhang, B. Q.
2018-05-01
In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called ‘extended importance sampling’, is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an ‘equivalent’ reliability problem under probability theory is obtained. Then the samples of these variables are generated by importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. It is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
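A compact sketch of the idea (my toy version, not the paper's algorithm: one epistemic variable described by interval focal elements with basic probability assignments, one aleatory variable handled by shifted-instrumental importance sampling, and a limit state monotone in the epistemic variable so interval endpoints bound the failure set):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Limit state: failure when g < 0; y is aleatory (standard normal), x epistemic.
g = lambda x, y: 6.0 - x - y

# Evidence on x: interval focal elements with basic probability assignments.
focal = [((1.0, 2.0), 0.3), ((1.5, 3.0), 0.5), ((2.5, 3.5), 0.2)]

# Importance sampling for the aleatory variable: instrumental density shifted
# toward the failure region, with likelihood-ratio weights.
n = 200_000
y = rng.normal(3.0, 1.0, n)
w = norm.pdf(y) / norm.pdf(y, 3.0, 1.0)

bel = pl = 0.0
for (lo, hi), m in focal:
    # g decreases in x, so the upper endpoint maximizes the failure set.
    p_possible = np.mean(w * (g(hi, y) < 0))   # failure possible somewhere on A_j
    p_certain = np.mean(w * (g(lo, y) < 0))    # failure for every x in A_j
    bel += m * p_certain
    pl += m * p_possible

print(f"belief = {bel:.2e} <= P(failure) <= plausibility = {pl:.2e}")
```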
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease increases variability in host abundance, and thus the probability of quasi-extinction, more strongly than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
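The qualitative comparison can be sketched in simulation (my toy sketch of a stochastic Ricker model with environmental noise and a quasi-extinction threshold; not the authors' host-parasite model):

```python
import numpy as np

rng = np.random.default_rng(7)

def quasi_extinction_prob(r=1.2, K=100.0, sigma=0.4, n0=50.0,
                          threshold=10.0, years=50, reps=5000):
    """Fraction of stochastic Ricker trajectories falling below the
    quasi-extinction threshold within the time horizon."""
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            eps = rng.normal(0.0, sigma)                # environmental noise
            n = n * np.exp(r * (1.0 - n / K) + eps)     # Ricker density dependence
            if n < threshold:
                hits += 1
                break
    return hits / reps

for sigma in (0.2, 0.4, 0.6):                           # disease-like variance inflation
    print(sigma, quasi_extinction_prob(sigma=sigma))
# Larger variability in abundance -> higher quasi-extinction probability,
# the qualitative effect the paper attributes to disease.
```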
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
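The functional form at issue is the classical maximum-entropy density constrained by the first m moments (a standard result, restated here in my notation):

```latex
% Maximum-entropy density subject to m moment constraints:
\[
  p(x) = \frac{1}{Z(\boldsymbol{\lambda})}
         \exp\!\left(-\sum_{k=1}^{m} \lambda_k x^k\right),
  \qquad
  Z(\boldsymbol{\lambda}) = \int \exp\!\left(-\sum_{k=1}^{m} \lambda_k x^k\right) dx,
\]
```

with the Lagrange multipliers λ_k chosen so that the moments of p match the given moments. The paper then treats the λ_k as parameters with posterior probabilities, which is what yields error bars on the estimated density function.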
Global warming precipitation accumulation increases above the current-climate cutoff scale
NASA Astrophysics Data System (ADS)
Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.
2017-02-01
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
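The described pdf shape, a slowly decaying power-law range terminated by a large-event cutoff, corresponds to a form like the following (my notation; the paper's exact exponents may differ):

```latex
% Probability density of event accumulations s, with cutoff scale s_L:
\[
  p(s) \;\propto\; s^{-\tau}\, e^{-s/s_L}.
\]
```

Because warming increases the cutoff scale s_L, the probability of accumulations near and beyond the current cutoff grows roughly exponentially with event size, even for modest shifts in s_L, which is the mechanism behind the hundreds-of-percent regional increases quoted above.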
A quadrature based method of moments for nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin L.; Vedula, Prakash
2011-09-01
Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
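The DQMOM representation referenced here approximates the density by a finite sum of weighted Dirac delta functions (the standard DQMOM ansatz, restated in my notation):

```latex
% DQMOM ansatz and the induced evolution of generalized moments:
\[
  f(x,t) \;\approx\; \sum_{i=1}^{N} w_i(t)\,\delta\bigl(x - x_i(t)\bigr),
  \qquad
  \frac{d}{dt}\int x^k f\,dx
  \;=\; \sum_{i=1}^{N} \frac{d}{dt}\!\left[w_i(t)\,x_i(t)^{k}\right],
\]
```

so the moment evolution implied by the Fokker-Planck equation closes into ODEs for the weights w_i and abscissas x_i; as the abstract notes, a smooth density can then be approximately reconstructed from the tracked moments by a maximum entropy method.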
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) total quantum statistical moment similarity ST, the area of overlap between the two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) total variability DT, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities integrated up to the intersection of their curves; (3) total variable probability 1-ST, the standard normal distribution probability within the interval DT; (4) total variable probability (1-β)α; and (5) stable confident probability β(1-α), the correct probability for making positive and negative conclusions under confidence coefficient α. With the model, we analyzed the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data-analytical methods for them, which ranged from 0.3852 to 0.9875, illuminating their different pharmacokinetic behaviors; the TQSMS similarities (ST) of chromatographic fingerprints for extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract ranged from 0.6842 to 0.9992, showing the different constituents of the various solvent extracts. The TQSMSS can characterize sample similarity, by which we can quantify, with a test of power, the correct probability of making positive and negative conclusions as to whether samples come from the same population under confidence coefficient α, and thereby realize analysis at both macroscopic and microcosmic levels, as an important similarity-analysis method for medical theoretical research.
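The central similarity measure, the overlap of two normal probability density curves, can be written compactly (my restatement of the overlap-area definition; the paper's parameterization via TQSM moments is assumed, not reproduced):

```latex
% Overlap coefficient of two normal densities fitted from TQSM parameters:
\[
  S_T \;=\; \int_{-\infty}^{\infty}
  \min\bigl\{\,\mathcal{N}(x;\mu_1,\sigma_1^2),\;
              \mathcal{N}(x;\mu_2,\sigma_2^2)\,\bigr\}\, dx,
\]
```

which equals 1 for identical moment profiles and decreases toward 0 as the fitted parameters diverge.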
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to adequate assessment of the mechanical effects of Es on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of differing precision and sources of uncertainty. Single CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. Differences between single CPT soundings under a normal distribution and simulated probability density curves based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. This characterization will provide a multi-precision information assimilation method for other geotechnical parameters.
Properties of strong-coupling magneto-bipolaron qubit in quantum dot under magnetic field
NASA Astrophysics Data System (ADS)
Xu-Fang, Bai; Ying, Zhang; Wuyunqimuge; Eerdunchaolu
2016-07-01
Based on the variational method of Pekar type, we study the energies and the wave-functions of the ground and the first-excited states of a magneto-bipolaron, which is strongly coupled to the LO phonon in a parabolic potential quantum dot under an applied magnetic field, and thus build up a quantum dot magneto-bipolaron qubit. The results show that the oscillation period of the probability density of the two electrons in the qubit decreases with increasing electron-phonon coupling strength α, resonant frequency of the magnetic field ωc, confinement strength of the quantum dot ω0, and dielectric constant ratio of the medium η; the probability density of the two electrons in the qubit oscillates periodically with increasing time t, angular coordinate φ2, and dielectric constant ratio of the medium η; the probability of the electron appearing near the center of the quantum dot is larger, and the probability of the electron appearing away from the center of the quantum dot is much smaller. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. E2013407119) and the Items of Institution of Higher Education Scientific Research of Hebei Province and Inner Mongolia, China (Grant Nos. ZD20131008, Z2015149, Z2015219, and NJZY14189).
The influences of delay time on the stability of a market model with stochastic volatility
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-02-01
The effects of the delay time on the stability of a market model are investigated, using a modified Heston model with a cubic nonlinearity and cross-correlated noise sources. The results indicate that: (i) there is an optimal delay time τo which maximally enhances the stability of the stock price under strong demand elasticity of stock price, and maximally reduces the stability of the stock price under weak demand elasticity of stock price; (ii) the cross-correlation coefficient of the noises and the delay time play opposite roles in the stability for delay times below τo and the same role for delay times above τo. Moreover, the probability density function of the escape time of stock price returns, the probability density function of the returns, and the correlation function of the returns are compared with those in other literature.
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Gravity evidence for a shallow intrusion under Medicine Lake volcano, California.
Finn, C.; Williams, D.L.
1982-01-01
A positive gravity anomaly is associated with Medicine Lake volcano, California. Trials with different Bouguer reduction densities indicate that this positive anomaly cannot be explained by an inappropriate choice of Bouguer reduction density but must be caused by a subvolcanic body. After separating the Medicine Lake gravity high from the regional field, we were able to fit the 27 mGal positive residual anomaly with a large, shallow body of high density contrast (+0.41 g/cm³) and a thickness of 2.5 km. We interpret this body to be an intrusion of dense material emplaced within the several-kilometres-thick older volcanic layer that probably underlies Medicine Lake volcano.
Technical Reports Prepared Under Contract N00014-76-C-0475.
1987-05-29
Partial list of report entries recoverable from the scanned index (number, title, author(s), date): 264, Approximations to Densities in Geometric Probability, H. Solomon and M. A. Stephens, 10/27/78; 265, Sequential ... Certain Multivariate Normal Probabilities, S. Iyengar, 8/12/82; 323, EDF Statistics for Testing for the Gamma Distribution with ..., M. A. Stephens, 8/13/82; 360, Random Sequential Coding by Hamming Distance, Yoshiaki Itoh and Herbert Solomon, 07-11-85; 361, Transforming Censored Samples and Testing Fit, ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel two kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity- and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
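The core fusion step combines an intensity-based and a location-based estimate into one posterior. As a hedged illustration of that idea only (not the paper's implementation, which works with full per-voxel conditional densities), the sketch below fuses two Gaussian estimates by precision weighting; the function name and the numbers are our own.

```python
import numpy as np

def fuse_gaussian_estimates(mu_int, var_int, mu_geo, var_geo):
    """Fuse an intensity-based and a geometry-based Gaussian estimate of
    electron density into one posterior (precision-weighted average),
    assuming conditional independence of the two information sources."""
    w_int, w_geo = 1.0 / var_int, 1.0 / var_geo
    var_post = 1.0 / (w_int + w_geo)
    mu_post = var_post * (w_int * mu_int + w_geo * mu_geo)
    return mu_post, var_post

# hypothetical voxel: intensity model says 1.05 +/- 0.10, spatial atlas
# says 1.20 +/- 0.05 (relative electron density)
print(fuse_gaussian_estimates(1.05, 0.10**2, 1.20, 0.05**2))
```

The precision weighting captures why the combined estimator beats either source alone: whichever cue is less ambiguous for a given voxel dominates the posterior.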
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic underlying process.
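The window-stacking procedure is straightforward to prototype. The sketch below works under assumptions of our own: a synthetic two-regime catalog, the standard Aki maximum-likelihood b-value estimator, and arbitrary window counts. Stacking estimates from randomly placed windows then yields a multimodal density, mimicking the mode switching described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def b_value(mags, m_c):
    """Aki maximum-likelihood b-value for magnitudes >= m_c."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# synthetic catalog: two consecutive regimes with b = 1 and b = 2
m_c = 2.0
mags = np.concatenate([m_c + rng.exponential(np.log10(np.e) / b, 2000)
                       for b in (1.0, 2.0)])

# randomly placed windows, whose estimates are stacked into one density
n_win, win_len = 2000, 300
starts = rng.integers(0, len(mags) - win_len, n_win)
b_hats = np.array([b_value(mags[s:s + win_len], m_c) for s in starts])

hist, edges = np.histogram(b_hats, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("most heavily weighted b bins:",
      np.round(np.sort(centers[np.argsort(hist)[-3:]]), 2))
```

Windows falling inside one regime recover its b value; only windows straddling the regime boundary contribute intermediate estimates, so the stacked density concentrates around the two underlying modes.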
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Causal illusions in children when the outcome is frequent
2017-01-01
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
Modelling spruce bark beetle infestation probability
Paulius Zolubas; Jose Negron; A. Steven Munson
2009-01-01
A spruce bark beetle (Ips typographus L.) risk model, based on pure Norway spruce (Picea abies Karst.) stand characteristics in experimental and control plots, was developed using the classification and regression tree statistical technique under endemic pest population density. The most significant variable in spruce bark beetle...
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
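Fitting a beta distribution to fractional cloud-cover data is a small, self-contained exercise. The sketch below uses method-of-moments estimates on synthetic data; the parameter values are illustrative and not taken from the paper's 160 empirical distributions.

```python
import numpy as np

def fit_beta_moments(x):
    """Method-of-moments estimates of Beta(alpha, beta) parameters
    for data on (0, 1), e.g. fractional cloud cover."""
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common   # (alpha, beta)

rng = np.random.default_rng(3)
cover = rng.beta(0.4, 0.6, size=5000)   # U-shaped, as cloud records often are
print(fit_beta_moments(cover))          # recovers roughly (0.4, 0.6)
```

Shape parameters both below 1 give the U-shaped density (mostly clear or mostly overcast skies) that makes the beta family a natural candidate for cloud cover.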
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.
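To make the compensation argument concrete, the sketch below runs a minimal stochastic Ricker model with a constant proportional take and estimates quasi-extinction probability by Monte Carlo. The model form, parameter values, threshold, and function name are our own illustrative assumptions, not the piping plover model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def quasi_extinction_prob(r, K, take, n0=100, q_thresh=10,
                          years=50, n_sims=2000, sigma=0.15):
    """Monte Carlo probability that a Ricker population with environmental
    noise and a constant proportional take falls below q_thresh."""
    n = np.full(n_sims, float(n0))
    hit = np.zeros(n_sims, dtype=bool)
    for _ in range(years):
        eps = rng.normal(0.0, sigma, n_sims)
        n = n * np.exp(r * (1 - n / K) + eps)   # density-dependent growth
        n *= (1 - take)                          # incidental take
        hit |= n < q_thresh
    return hit.mean()

for take in (0.0, 0.02, 0.05, 0.10):
    print(take, quasi_extinction_prob(r=0.1, K=150, take=take))
```

In this toy version the density-dependent term absorbs small takes with little change in quasi-extinction risk, but beyond the compensatory capacity (take approaching the intrinsic growth rate) risk rises sharply, echoing the contrast the abstract draws.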
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Under the Gaussian assumption for the probability-density functions, it is difficult to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the part of the pdf where the variables are allowed, as proposed by Shimada et al. (1998). However, a problem with this method is that the probability that, e.g., the sea-ice concentration is exactly zero is itself zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
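In an ensemble setting, a truncated Gaussian with delta mass at the bound is exactly what clipping the ensemble members produces. The sketch below illustrates this reading of the construction (it is not the abstract's own code): after clipping, the event "concentration equals zero" carries finite probability, unlike under pdf truncation with redistribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def constrain_ensemble(ens, lower=0.0, upper=1.0):
    """Move the probability mass a Gaussian ensemble puts outside
    [lower, upper] into delta 'spikes' at the bounds, by clipping members.
    This realizes a truncated Gaussian plus deltas at the truncation points."""
    return np.clip(ens, lower, upper)

# hypothetical analysis ensemble for sea-ice concentration
ens = rng.normal(loc=0.05, scale=0.1, size=10_000)
c_ens = constrain_ensemble(ens)
print("P(concentration == 0) ~", (c_ens == 0.0).mean())   # finite, not zero
```

The clipped members sit exactly at the bound, so the ensemble can represent states such as "no ice at all" with nonzero probability, which is the physical point the abstract makes.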
Broadcasting but not receiving: density dependence considerations for SETI signals
NASA Astrophysics Data System (ADS)
Smith, Reginald D.
2009-04-01
This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance for a communicative civilization. On this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and whether reciprocal contact is possible.
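A stripped-down version of the density argument: if broadcasting civilizations are scattered like a spatial Poisson process, the chance that at least one lies within a given broadcast radius follows from the expected count in that sphere. The model below and its numbers are illustrative assumptions, far simpler than the paper's treatment.

```python
import numpy as np

def contact_probability(density, radius):
    """Probability that at least one other broadcasting civilization lies
    within 'radius' (light-years), for civilizations scattered as a Poisson
    process with 'density' (civilizations per cubic light-year)."""
    expected = density * 4.0 / 3.0 * np.pi * radius**3
    return 1.0 - np.exp(-expected)

# hypothetical numbers: 1000-ly broadcast range, various densities
for density in (1e-12, 1e-10, 1e-8):
    print(density, contact_probability(density, 1000.0))
```

Because the expected count scales with the cube of the broadcast radius, modest changes in assumed range dominate the contact probability, which is why a minimum density threshold emerges.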
A cross-diffusion system derived from a Fokker-Planck equation with partial averaging
NASA Astrophysics Data System (ADS)
Jüngel, Ansgar; Zamponi, Nicola
2017-02-01
A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
How likely are constituent quanta to initiate inflation?
Berezhiani, Lasha; Trodden, Mark
2015-08-06
In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian, energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific, yet quite general and well-grounded, assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
The emergence of different tail exponents in the distributions of firm size variables
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki
2013-05-01
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
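Estimating the tail exponents discussed above is commonly done with the Hill estimator. The sketch below applies it to a synthetic Pareto sample standing in for a firm-size variable; the sample, the choice of k, and the helper name are our own illustrative assumptions rather than the authors' procedure.

```python
import numpy as np

def hill_tail_exponent(x, k):
    """Hill estimator of a power-law tail exponent from the k largest
    observations (threshold = the (k+1)-th largest value)."""
    top = np.sort(x)[-(k + 1):]          # threshold followed by k largest
    return k / np.sum(np.log(top[1:] / top[0]))

rng = np.random.default_rng(5)
assets = rng.pareto(1.5, 100_000) + 1.0   # synthetic firm sizes, tail index 1.5
print(hill_tail_exponent(assets, k=1000))  # close to 1.5
```

Applying the same estimator separately to each of K, L, and Y is the kind of computation that reveals the different tail exponents the paper sets out to explain.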
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
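A minimal version of this analysis can be prototyped with an off-the-shelf HMM library. The sketch below fits a two-state Gaussian HMM to log interspike intervals from a synthetic spike train; hmmlearn is our stand-in choice of library, and the rates, state count, and i.i.d. state generation are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # stand-in HMM library

rng = np.random.default_rng(11)

# synthetic interspike intervals (ISIs): two hidden states with different
# firing rates; states drawn i.i.d. here for simplicity, not as a Markov chain
states = (rng.random(1000) < 0.3).astype(int)
isi = np.where(states == 0,
               rng.exponential(0.02, 1000),    # fast-firing state
               rng.exponential(0.20, 1000))    # slow-firing state

# fit a 2-state HMM to log-intervals (closer to Gaussian within each state)
model = GaussianHMM(n_components=2, n_iter=200)
model.fit(np.log(isi).reshape(-1, 1))
print("state means (log s):", model.means_.ravel())
print("transition matrix:\n", model.transmat_.round(2))
```

As in the paper, the fitted per-state emission densities describe the ISI distribution within each putative neuronal state, while the transition matrix summarizes switching between states.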
1984-09-28
[Fragmentary scanned text. Recoverable content: uncertainty is expressed as a probability density distribution; a prior probability that the software contains errors is updated as test failure data are accumulated; both parametric and nonparametric versions are presented; the author shows that the bootstrap underlies the jackknife method.]
Nonstationary envelope process and first excursion probability
NASA Technical Reports Server (NTRS)
Yang, J.
1972-01-01
A definition of the envelope of nonstationary random processes is proposed. The establishment of the envelope definition makes it possible to simulate the nonstationary random envelope directly. Envelope statistics, such as the density function, joint density function, moment function, and level-crossing rate, which are relevant to analyses of catastrophic failure, fatigue, and crack propagation in structures, are derived. Applications of the envelope statistics to the prediction of structural reliability under random loadings are discussed in detail.
Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki
2014-01-01
The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as the effectiveness of internal exposure relative to external exposure to γ-rays) is occasionally believed to be much greater than unity owing to insufficient discussion of the differences in their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses on subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as for external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure to 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but the maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
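For the completely random (Poisson) case, the relationship between density and point-to-plant distance yields a closed-form, order-statistics-based estimator. The sketch below checks the familiar unbiased form (n-1)/(π Σ r²) on a simulated random pattern; the window size, sample counts, and edge handling are our own illustrative assumptions, and this classical Poisson estimator is simpler than the paper's general nonparametric one.

```python
import numpy as np

rng = np.random.default_rng(2)

def density_estimate(r):
    """(n-1) / (pi * sum r_i^2): unbiased density estimator for a Poisson
    pattern, from distances r between random sample points and the nearest
    plant (sample points assumed effectively independent)."""
    return (len(r) - 1) / (np.pi * np.sum(r ** 2))

plants = rng.random((5000, 2)) * 100     # 5000 plants in 100 x 100: 0.5 per unit^2
points = rng.random((200, 2)) * 90 + 5   # sample points, kept away from edges
d = np.min(np.linalg.norm(points[:, None] - plants[None, :], axis=-1), axis=1)
print(density_estimate(d))               # close to the true 0.5
```

The unbiasedness follows because, for a Poisson pattern of intensity λ, the quantity πλr² for each distance is a standard exponential variate, so the sum is gamma-distributed and its reciprocal has a known mean.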
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
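The closing idea, that drawing a point of low model density is itself an unlikely event, suggests a direct Monte Carlo check. The sketch below computes the probability, under the specified density, of landing in a region of density no higher than that of the observed draw; the function name and single-draw setup are our own illustrative assumptions, not the authors' exact test statistics.

```python
import numpy as np
from scipy import stats

def low_density_pvalue(x_obs, pdf, sampler, n_mc=100_000):
    """p-value of the event 'a draw from the model lands in a region of
    density no higher than the observed one', estimated by Monte Carlo."""
    f_obs = pdf(x_obs)
    f_mc = pdf(sampler(n_mc))
    return (f_mc <= f_obs).mean()

model = stats.norm(0.0, 1.0)
# a draw far in the tail has small density, hence a small p-value
print(low_density_pvalue(4.0, model.pdf, model.rvs))
```

Unlike a test based on the cumulative distribution function, this statistic responds directly to local dips in the density, which is precisely the deficiency of Kolmogorov-Smirnov-type tests the paper targets.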
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Li, Ye; Yu, Lin; Zhang, Yixin
2017-05-29
Applying the angular spectrum theory, we derive the expression of a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish a model of the received probability density of orbital angular momentum (OAM) modes for propagation through an anisotropic turbulent ocean. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that the influence of anisotropic oceanic turbulence on the received probability of signal OAM modes is smaller than that of isotropic oceanic turbulence under the same conditions, and that the effect of salinity fluctuation on the received probability of the signal OAM modes is larger than that of temperature fluctuation. When the dissipation rate of kinetic energy per unit mass of fluid is strong and the dissipation rate of temperature variance is weak, the effects of turbulence on the received probability of signal OAM modes can be reduced by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting the degradation caused by oceanic turbulence.
The Feynman-Vernon Influence Functional Approach in QED
NASA Astrophysics Data System (ADS)
Biryukov, Alexander; Shleenkov, Mark
2016-10-01
In the path-integral approach, we describe the evolution of interacting electromagnetic and fermionic fields using the density-matrix formalism. The equation for the density matrix and the transition probability of the fermionic field are obtained by averaging over the electromagnetic-field influence functional. We obtain a formula for calculating the electromagnetic-field influence functional for various initial and final states, and derive it explicitly when both states are the vacuum. We also present the Lagrangian for a relativistic fermionic field under the influence of the electromagnetic vacuum.
Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat
2018-06-19
The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7 or by increasing extracellular concentrations of Mg2+ or Zn2+. Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for normal synaptic and cognitive function under physiological, non-pathological conditions.
Line transect estimation of population size: the exponential case with grouped data
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1979-01-01
Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.
Lyke, Stephen D; Voelz, David G; Roggemann, Michael C
2009-11-20
The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius ρ0 and lognormal for aperture sizes on the order of ρ0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
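For reference, the gamma-gamma density against which such simulation histograms are compared has a closed form in terms of the modified Bessel function of the second kind. The sketch below evaluates it for illustrative scintillation parameters α and β; the values are our own, not taken from the paper.

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(i, alpha, beta):
    """Gamma-gamma PDF of mean-normalized irradiance i, with large- and
    small-scale scintillation parameters alpha and beta."""
    coef = (2.0 * (alpha * beta) ** ((alpha + beta) / 2.0)
            / (gamma(alpha) * gamma(beta)))
    return (coef * i ** ((alpha + beta) / 2.0 - 1.0)
            * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * i)))

i = np.linspace(0.01, 3.0, 5)
print(gamma_gamma_pdf(i, alpha=4.0, beta=2.0))
```

Comparing this curve with a lognormal fit of matched mean and variance over a histogram of aperture-averaged irradiance is the essence of the model comparison described above.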
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuations strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise
NASA Astrophysics Data System (ADS)
Artyushenko, V. M.; Volovach, V. I.
2018-01-01
We considered the problems of the synthesis and analysis of discriminators when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case the discriminator of the tracking meter should contain a nonlinear transformation unit whose characteristics are determined by the Fisher information associated with the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of discriminators working under the above conditions are obtained. It is shown that the efficiency of the nonlinear processing depends on the ratio of the power of the FM noise to the power of the Gaussian noise. The analysis of the information loss of signal transformation caused by the linear section of the discriminatory characteristic of the nonlinear transformation unit is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information associated with the probability density function of the mixture of non-Gaussian noise and mismatch errors.
NASA Astrophysics Data System (ADS)
Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.
2018-03-01
In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction and subjected to colored noise is investigated. Using the stochastic averaging method, the stationary probability density functions of the amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss stochastic bifurcations in terms of qualitative changes, showing that the noise intensity, the correlation time, and other system parameters can be treated as bifurcation parameters. Numerical simulations are carried out for comparison with the analytical findings. The mean first-passage time (MFPT) is computed numerically to investigate the stability of the system. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to high output power.
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
Assessing environmental DNA detection in controlled lentic systems.
Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin
2014-01-01
Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324, 1-L, water samples collected from four experimental ponds. The best-approximating model indicated that the per-L-sample probability of eDNA detection was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column to detect eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
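The sample-size arithmetic in the closing sentences follows from treating each 1-L sample as an independent Bernoulli detection: the number of samples n needed for target certainty satisfies 1 - (1 - p)^n ≥ 0.95. A quick check, with per-sample detection probabilities chosen by us to roughly match the three density regimes:

```python
import math

def samples_for_detection(p_per_sample, target=0.95):
    """Number of 1-L samples needed so that at least one detects eDNA with
    probability 'target', assuming independent samples with per-sample
    detection probability p_per_sample."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

# illustrative per-litre detection probabilities (high/moderate/low density)
for p in (0.50, 0.07, 0.03):
    print(p, samples_for_detection(p))
```

With these hypothetical inputs the required sample counts fall near 5, 42, and about 100 L, consistent with the ranges the abstract reports for high, moderate, and low densities.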
NASA Astrophysics Data System (ADS)
Valageas, P.
2000-02-01
In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. Then, we can directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies) where the local slope of the initial linear power spectrum is n = -2. We recover the behaviour seen in numerical simulations: P(μ) peaks at a value slightly smaller than the mean ⟨μ⟩ = 1 and shows an extended large-μ tail (as described in another article, our predictions also show good quantitative agreement with results from N-body simulations for a finite smoothing angle). Then, we study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s = 0.5 and z_s = 1. However, observations can unambiguously discriminate between Ω_m = 0.3 and Ω_m = 1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified", the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measurement of weak lensing distortions.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
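The substitution of the sample mean by the L1-median is easy to demonstrate. The sketch below implements Weiszfeld's classical fixed-point iteration, a standard algorithm for the geometric median (though not necessarily the authors' exact estimator), and contrasts it with the mean on contaminated data.

```python
import numpy as np

def l1_median(x, n_iter=100, tol=1e-9):
    """Weiszfeld's algorithm for the L1-median (geometric median) of the
    rows of x; a robust location estimate for PDF estimation."""
    m = x.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(x - m, axis=1), tol)  # avoid div by 0
        w = 1.0 / d
        m_new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(6)
data = np.vstack([rng.normal(0.0, 1.0, (200, 2)),     # clean cluster
                  rng.normal(50.0, 1.0, (20, 2))])     # gross outliers
print("mean:     ", data.mean(axis=0).round(2))        # pulled by outliers
print("L1-median:", l1_median(data).round(2))          # stays near the origin
```

Plugging such a robust center (and a comparably robust spread measure) into a kernel or Gaussian-mixture density estimator is the strategy the abstract describes for outlier-resistant PDF estimation.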
Gravity anomaly and density structure of the San Andreas fault zone
NASA Astrophysics Data System (ADS)
Wang, Chi-Yuen; Rui, Feng; Zhengsheng, Yao; Xingjue, Shi
1986-01-01
A densely spaced gravity survey across the San Andreas fault zone was conducted near Bear Valley, about 180 km south of San Francisco, along a cross-section where a detailed seismic reflection profile was previously made by McEvilly (1981). With Feng and McEvilly's (1983) velocity structure of the fault zone at this cross-section as a constraint, the density structure of the fault zone is obtained through inversion of the gravity data by a method used by Parker (1973) and Oldenburg (1974). Although the resulting density picture cannot be unique, it is better constrained and contains more detailed information about the structure of the fault than was previously possible. The most striking feature of the resulting density structure is a deeply seated tongue of low-density material within the fault zone, probably representing a wedge of fault gouge between the two moving plates, which projects from the surface to the base of the seismogenic zone. From reasonable assumptions concerning the density of the solid grains and the state of saturation of the fault zone, the average porosity of this low-density fault gouge is estimated as about 12%. Stress-induced cracks are not expected to create so much porosity under the pressures in the deep fault zone. Large-scale removal of fault-zone material by hydrothermal alteration, dissolution, and subsequent fluid transport may have occurred to produce this pronounced density deficiency. In addition, a broad, funnel-shaped belt of low density appears about the upper part of the fault zone, which probably represents a belt of extensively shattered wall rocks.
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Zerrad, M; Soriano, G; Ghabbach, A; Amra, C
2013-02-11
We show how disordered media make it possible to increase the local degree of polarization (DOP) of an arbitrary partially polarized incident beam. The role of cross-scattering coefficients is emphasized, together with the probability density functions (PDFs) of the scattering DOP. The average DOP of scattering is calculated versus the DOP of the incident illumination.
Multivariate Epi-splines and Evolving Function Identification Problems
2015-04-15
[Fragmentary scanned text. Recoverable content: because such extrinsic information, as well as observed function and subgradient values, often evolves in applications, conditions are established under which the approximations remain valid; a previous study [30] dealt with compact intervals of R; splines are intimately tied to optimization problems through their variational theory; the work is motivated by applications in curve fitting, regression, probability density estimation, variogram computation, and financial curve construction.]
Safe Onboard Guidance and Control Under Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars James
2011-01-01
An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
NASA Astrophysics Data System (ADS)
Duarte Queirós, S. M.
2005-08-01
This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with a power law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was first proposed empirically to fit high-frequency stock traded volume distributions in financial markets and has been verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean traded volume with the typical herding behaviour of financial traders. Finally, NASDAQ 1- and 2-minute stock traded volume sequences and probability density functions are numerically reproduced.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
NASA Astrophysics Data System (ADS)
Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid
2013-02-01
We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.
The statistics of peaks of Gaussian random fields. [cosmological density fluctuations
NASA Technical Reports Server (NTRS)
Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.
1986-01-01
A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
Colchero, Fernando; Medellin, Rodrigo A; Clark, James S; Lee, Raymond; Katul, Gabriel G
2009-05-01
1. Our understanding of the interplay between density dependence, climatic perturbations, and conservation practices on the dynamics of small populations is still limited. This can result in uninformed strategies that put endangered populations at risk. Moreover, the data available for a large number of populations in such circumstances are sparse and riddled with missing data. Under current climate change scenarios, it is essential to develop appropriate inferential methods that can make use of such data sets. 2. We studied a population of desert bighorn sheep introduced to Tiburon Island, Mexico in 1975 and subjected to irregular extractions for the last 10 years. The unique attributes of this population are the absence of predation and disease, thereby permitting us to explore the combined effect of density dependence, environmental variability and extraction in a 'controlled setting.' Using a combination of nonlinear discrete models with long-term field data, we constructed three basic Bayesian state space models with increasing density dependence (DD), and the same three models with the addition of summer drought effects. 3. We subsequently used Monte Carlo simulations to evaluate the combined effect of drought, DD, and increasing extractions on the probability of population survival under two climate change scenarios (based on the Intergovernmental Panel on Climate Change predictions): (i) an increase in drought variability; and (ii) an increase in mean drought severity. 4. The population grew from 16 individuals introduced in 1975 to close to 700 by 1993. Our results show that the population's growth was dominated by DD, with drought having a secondary but still relevant effect on its dynamics. 5. Our predictions suggest that under climate change scenario (i), extraction dominates the fate of the population, while for scenario (ii), an increase in mean drought affects the population's probability of survival with a magnitude equivalent to that of extractions. Thus, for the long-term survival of the population, our results stress that a more variable environment is less threatening than one in which the mean conditions become harsher. Current climate change scenarios and their underlying uncertainty make studies such as this one crucial for understanding the dynamics of ungulate populations and their conservation.
Moran, Michael J.; Zogorski, John S.; Squillace, Paul J.
2004-01-01
The occurrence and implications of methyl tert-butyl ether (MTBE) and gasoline hydrocarbons were examined in three surveys of water quality conducted by the U.S. Geological Survey: one national-scale survey of ground water, one national-scale survey of source water from ground water, and one regional-scale survey of drinking water from ground water. The overall detection frequency of MTBE in all three surveys was similar to the detection frequencies of some other volatile organic compounds (VOCs) that have much longer production and use histories in the United States. The detection frequency of MTBE was higher in drinking water and lower in source water and ground water. However, when the data for ground water and source water were limited to the same geographic extent as drinking-water data, the detection frequencies of MTBE were comparable to the detection frequency of MTBE in drinking water. In all three surveys, the detection frequency of any gasoline hydrocarbon was less than the detection frequency of MTBE. No concentration of MTBE in source water exceeded the lower limit of the U.S. Environmental Protection Agency's Drinking-Water Advisory of 20 µg/L (micrograms per liter). One concentration of MTBE in ground water exceeded 20 µg/L, and 0.9 percent of drinking-water samples exceeded 20 µg/L. The overall detection frequency of MTBE relative to other widely used VOCs indicates that MTBE is an important concern with respect to ground-water management. The probability of detecting MTBE was strongly associated with population density, use of MTBE in gasoline, and recharge, and weakly associated with density of leaking underground storage tanks, soil permeability, and aquifer consolidation. Only concentrations of MTBE above 0.5 µg/L were associated with dissolved oxygen. Ground water underlying areas with high population density, ground water underlying areas where MTBE is used as a gasoline oxygenate, and ground water underlying areas with high recharge has a greater probability of MTBE contamination. Ground water from public-supply wells and shallow ground water underlying urban land-use areas has a greater probability of MTBE contamination compared to ground water from domestic wells and ground water underlying rural land-use areas.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean]
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Contagious seed dispersal beneath heterospecific fruiting trees and its consequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwit, Charles; Levey, Douglas, J.; Greenberg, Cathyrn, H.
2004-05-03
Kwit, Charles, D. J. Levey and Cathryn H. Greenberg. 2004. Contagious seed dispersal beneath heterospecific fruiting trees and its consequences. Oikos 107:303-308. A hypothesized advantage of seed dispersal is avoidance of high per capita mortality (i.e., density-dependent mortality) associated with dense populations of seeds and seedlings beneath parent trees. This hypothesis, inherent in nearly all seed dispersal studies, assumes that density effects are species-specific. Yet because many tree species exhibit overlapping fruiting phenologies and share dispersers, seeds may be deposited preferentially under synchronously fruiting heterospecific trees, another location where they may be particularly vulnerable to mortality, in this case by generalist seed predators. We demonstrate that frugivores disperse higher densities of Cornus florida seeds under fruiting (female) Ilex opaca trees than under non-fruiting (male) Ilex trees in temperate hardwood forest settings in South Carolina, USA. To determine if density of Cornus and/or Ilex seeds influences survivorship of dispersed Cornus seeds, we followed the fates of experimentally dispersed Cornus seeds in neighborhoods of differing, manipulated background densities of Cornus and Ilex seeds. We found that the probability of predation on dispersed Cornus seeds was a function of both Cornus and Ilex background seed densities. Higher densities of Ilex seeds negatively affected Cornus seed survivorship, and this was particularly evident as background densities of dispersed Cornus seeds increased. These results illustrate the importance of viewing seed dispersal and predation in a community context, as the pattern and intensity of density-dependent mortality may not be solely a function of conspecific densities.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both requirements simultaneously. However, the randomness in a CS under-sampling trajectory designed with the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually requires multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
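A minimal Python sketch of this segmentation strategy, assuming a 1D k-space and purely illustrative parameters (core fraction, PDF decay exponent, target acceleration); the low-frequency core is kept deterministically and the VD scheme is applied only to the outer, high-frequency region:

```python
import numpy as np

def vd_mask(n=256, core_frac=0.08, decay=4.0, accel=4.0, seed=0):
    """1D under-sampling mask: keep a fully sampled low-frequency core and
    draw the high-frequency samples from a polynomially decaying PDF."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n / 2)   # normalized spatial frequency
    pdf = (1.0 - k) ** decay                      # variable-density sampling PDF
    core = k <= core_frac
    pdf[core] = 1.0                               # deterministic low-frequency core
    outer = ~core
    target = n / accel - core.sum()               # expected samples outside the core
    pdf[outer] = np.clip(pdf[outer] * target / pdf[outer].sum(), 0.0, 1.0)
    mask = np.zeros(n, dtype=bool)
    mask[core] = True
    mask[outer] = rng.random(outer.sum()) < pdf[outer]
    return mask

mask = vd_mask()
print(f"sampling fraction: {mask.mean():.3f}")    # close to 1 / accel
```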
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Q_n → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.
Bluegill growth as modified by plant density: an exploration of underlying mechanisms
Savino, Jacqueline F.; Marschall, Elizabeth A.; Stein, Roy A.
1992-01-01
Bluegill (Lepomis macrochirus) growth varies inconsistently with plant density. In laboratory and field experiments, we explored mechanisms underlying bluegill growth as a function of plant and invertebrate density. In the laboratory, bluegills captured more chironomids (Chironomus riparius) than damselflies (Enallagma spp. and Ischnura spp.), but energy intake per time spent searching did not differ between damselfly and chironomid treatments. From laboratory data, we described prey encounter rates as functions of plant and invertebrate density. In Clark Lake, Ohio, we created 0.05-ha mesocosms of inshore vegetation to generate macrophyte densities of 125, 270, and 385 stems/m² of Potamogeton and Ceratophyllum and added 46-mm bluegill (1/m²). In these mesocosms, invertebrate density increased as a function of macrophyte density. Combining this function with encounter rate functions derived from laboratory data, we predicted that bluegill growth should peak at a high macrophyte density, greater than 1000 stems/m², even though growth should change only slightly beyond 100 stems/m². Consistent with our predictions, bluegills did not grow differentially, nor did their use of different prey taxa differ, across macrophyte densities in the field. Bluegills preferred chironomid pupae, which were relatively few in number but vulnerable to predation, whereas the more cryptic chironomid larvae, which were associated with vegetation and relatively abundant, were eaten as encountered. Bluegills avoided physid snails. Contrary to previous work, vegetation did not influence growth or diet of bluegill beyond relatively low densities, owing to the interaction between capture probabilities and macroinvertebrate densities.
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum likelihood approach.
NASA Astrophysics Data System (ADS)
Bura, E.; Zhmurov, A.; Barsegov, V.
2009-01-01
Dynamic force spectroscopy and steered molecular simulations have become powerful tools for analyzing the mechanical properties of proteins, and the strength of protein-protein complexes and aggregates. Probability density functions of the unfolding forces and unfolding times for proteins, and rupture forces and bond lifetimes for protein-protein complexes allow quantification of the forced unfolding and unbinding transitions, and mapping the biomolecular free energy landscape. The inference of the unknown probability distribution functions from the experimental and simulated forced unfolding and unbinding data, as well as the assessment of analytically tractable models of the protein unfolding and unbinding requires the use of a bandwidth. The choice of this quantity is typically subjective as it draws heavily on the investigator's intuition and past experience. We describe several approaches for selecting the "optimal bandwidth" for nonparametric density estimators, such as the traditionally used histogram and the more advanced kernel density estimators. The performance of these methods is tested on unimodal and multimodal skewed, long-tailed distributed data, as typically observed in force spectroscopy experiments and in molecular pulling simulations. The results of these studies can serve as a guideline for selecting the optimal bandwidth to resolve the underlying distributions from the forced unfolding and unbinding data for proteins.
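Since bandwidth choice is the crux here, a short illustration may help. A minimal Python sketch, with synthetic bimodal, skewed data of the kind the abstract describes (all sample sizes and distribution parameters are illustrative), compares Silverman's rule-of-thumb bandwidth with a leave-one-out cross-validated one:

```python
import numpy as np

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h."""
    n = len(data)
    u = (data[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(K, 0.0)                     # drop each point's own kernel
    return np.log(K.sum(axis=1) / (n - 1)).sum()

rng = np.random.default_rng(1)
# Synthetic bimodal, skewed "rupture force"-like sample
data = np.concatenate([rng.gamma(3.0, 20.0, 300), 150.0 + rng.normal(0.0, 10.0, 200)])

h_rot = 1.06 * data.std() * len(data) ** (-1.0 / 5.0)    # Silverman's rule of thumb
grid = np.linspace(0.2, 3.0, 30) * h_rot
h_cv = grid[np.argmax([loo_log_likelihood(data, h) for h in grid])]
print(f"rule-of-thumb h: {h_rot:.2f}, cross-validated h: {h_cv:.2f}")
```

For multimodal data such as this, the cross-validated bandwidth is typically smaller than the rule of thumb, which is tuned to unimodal Gaussian-like samples.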
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
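A toy version of this pipeline can be sketched in Python under strong simplifications: AIC-based Akaike weights stand in for the paper's Kullback-Leibler multimodel inference, point fits stand in for joint parameter posteriors, and the candidate-model mixture serves as the common importance-sampling density. All distribution choices and sample sizes below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.lognormal(mean=1.0, sigma=0.4, size=15)        # scarce data

# Plausible candidate families, point-fitted; Akaike weights as model probabilities
candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "weibull": stats.weibull_min}
names = list(candidates)
fits, aic = {}, {}
for name in names:
    params = candidates[name].fit(data, floc=0)           # location fixed for stability
    ll = candidates[name].logpdf(data, *params).sum()
    fits[name] = params
    aic[name] = 2 * (len(params) - 1) - 2 * ll            # loc fixed: one fewer free parameter
delta = np.array([aic[n] for n in names]) - min(aic.values())
w = np.exp(-0.5 * delta)
w /= w.sum()                                              # model probabilities

# A single importance-sampling density representative of all models: their mixture
n_s = 2000
comp = rng.choice(len(names), size=n_s, p=w)
samples = np.array([candidates[names[c]].rvs(*fits[names[c]], random_state=rng) for c in comp])
q = sum(wi * candidates[n].pdf(samples, *fits[n]) for wi, n in zip(w, names))

# The same samples, reweighted under each candidate model (no re-propagation)
for name in names:
    iw = candidates[name].pdf(samples, *fits[name]) / q
    print(name, np.average(samples, weights=iw))          # per-model mean estimate
```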
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
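For context, the baseline that such envelope-based solutions refine is the Poisson (independent-crossings) approximation built on Rice's upcrossing rate. A minimal Python sketch with hypothetical time-varying statistics:

```python
import numpy as np

def upcrossing_rate(a, sigma, sigma_dot):
    """Rice's mean rate of upcrossings of level a for a zero-mean Gaussian process
    with standard deviation sigma and derivative standard deviation sigma_dot."""
    return (sigma_dot / (2.0 * np.pi * sigma)) * np.exp(-a**2 / (2.0 * sigma**2))

def first_excursion_prob(a, t, sigma_t, sigma_dot_t):
    """Poisson approximation: P(T) = 1 - exp(-integral of nu(a, t) dt)."""
    nu = upcrossing_rate(a, sigma_t, sigma_dot_t)
    integral = np.sum(0.5 * (nu[1:] + nu[:-1]) * np.diff(t))   # trapezoidal rule
    return 1.0 - np.exp(-integral)

t = np.linspace(0.0, 10.0, 1001)
sigma_t = 1.0 + 0.5 * np.sin(0.3 * t)   # hypothetical evolutionary intensity
sigma_dot_t = 5.0 * sigma_t             # hypothetical derivative-process scale
print(first_excursion_prob(3.0, t, sigma_t, sigma_dot_t))
```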
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
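A univariate illustration of the Gaussianizing step (the abstract's setting is multivariate and joint, so this is only the simplest case): scipy estimates the Box-Cox exponent by maximum likelihood on synthetic skewed data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.7, size=500)   # skewed, strictly positive data

y, lam = stats.boxcox(x)                           # ML estimate of the power-law exponent
print(f"estimated lambda: {lam:.3f}")
print(f"skewness before: {stats.skew(x):.2f}, after: {stats.skew(y):.2f}")

# Gaussian statistics estimated after the transform, as a detector would assume
mu, sigma = y.mean(), y.std(ddof=1)
print(f"Gaussian stats after transform: mu={mu:.3f}, sigma={sigma:.3f}")
```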
NASA Astrophysics Data System (ADS)
Hale, Stephen Roy
Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to the nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. The topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression, and a Digital Elevation Model was used to estimate elevation. Distance to the nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data not used in model calibration revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60; probabilities from 0.40 to <0.50 significantly underestimated presence, and probabilities greater than 0.60 significantly overestimated it. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, accompanied by a concomitant decrease in land area. The greater land area of lower-density habitats may therefore account for more total individuals and reproductive output than the smaller land area of higher-density habitat. Efforts to conserve areas of highest individual density, under the assumption that density reflects habitat quality, could thus target only a small fraction of the total population.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is a hot research topic with striking implications. In this paper, we propose a novel probability density function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured by adopting a reservoir sampling mechanism. Thresholds requested to detect a change are automatically derived once a false positive rate is set by the application designer. Comprehensive experiments validate the effectiveness in detection of the proposed method both in terms of detection promptness and accuracy.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula gives P(F) ∝ exp(-A F²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere potentials at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant A than that predicted by the DFT theory.
NASA Astrophysics Data System (ADS)
Fourrate, K.; Loulidi, M.
2006-01-01
We suggest a disordered traffic flow model that captures many features of traffic flow. It is an extension of the Nagel-Schreckenberg (NaSch) stochastic cellular automaton for single-lane vehicular traffic. It incorporates random acceleration and deceleration terms that may be greater than one unit. Under its intrinsic dynamics, for high values of the braking probability p_r, our model leads to a constant flow at intermediate densities without introducing any spatial inhomogeneities. For a system of fast drivers, p_r → 0, the model exhibits a density-wave behavior that was observed in car-following models with optimal velocity. For high values of p_r and random deceleration, the gap distribution of the disordered model exhibits, at a critical density, a power law, which is a hallmark of self-organized criticality.
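For reference, a compact implementation of the standard NaSch update that this model extends (unit acceleration and deceleration here; the disordered variant described above would draw those increments at random) might look as follows, with illustrative parameters:

```python
import numpy as np

def nasch_step(pos, vel, vmax, p_brake, L, rng):
    """One Nagel-Schreckenberg update on a ring: accelerate by one unit,
    respect the headway, brake randomly with probability p_brake, then move."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gap = (np.roll(pos, -1) - pos - 1) % L              # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)                     # acceleration
    vel = np.minimum(vel, gap)                          # collision avoidance
    brake = rng.random(len(vel)) < p_brake
    vel = np.where(brake, np.maximum(vel - 1, 0), vel)  # random deceleration
    return (pos + vel) % L, vel

rng = np.random.default_rng(4)
L, N, vmax = 1000, 200, 5                               # density N / L = 0.2
pos = rng.choice(L, size=N, replace=False)
vel = np.zeros(N, dtype=int)
for _ in range(1000):
    pos, vel = nasch_step(pos, vel, vmax, p_brake=0.3, L=L, rng=rng)
print(f"flow q = {vel.mean() * N / L:.3f}")             # density x mean velocity
```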
Food webs and fishing affect parasitism of the sea urchin Eucidaris galapagensis in the Galápagos
Sonnenholzner, Jorge I.; Lafferty, Kevin D.; Ladah, Lydia B.
2011-01-01
In the Galápagos Islands, two eulimid snails parasitize the common pencil sea urchin, Eucidaris galapagensis. Past work in the Galápagos suggests that fishing reduces lobster and fish densities and, due to this relaxation of predation pressure, indirectly increases urchin densities, creating the potential for complex indirect interactions between fishing and parasitic snails. To measure indirect effects of fishing on these parasitic snails, we investigated the spatial relationships among urchins, parasitic snails, commensal crabs, and large urchin predators (hogfish and lobsters). Parasitic snails had higher densities at sites where urchins were abundant, probably due to increased resource availability. Commensal crabs that shelter under urchin spines, particularly the endemic Mithrax nodosus, preyed on the parasitic snails in aquaria, and snails were less abundant at field sites where these crabs were common. In aquaria, hogfish and lobsters readily ate crabs, but crabs were protected from predation under urchin spines, leading to a facultative mutualism between commensal crabs and urchins. In the field, fishing appeared to indirectly increase the abundance of urchins and their commensal crabs by reducing predation pressure from fish and lobsters. Fished sites had fewer snails per urchin, probably due to increased predation from commensal crabs. However, because fished sites also tended to have more urchins, there was no significant net effect of fishing on the number of snails per square meter. These results suggest that fishing can have complex indirect effects on parasites by altering food webs.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han
2011-01-01
The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available, including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore, the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances, will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S-shaped growth curve, which depends on the specific growth rate under laboratory conditions, the so-called F/M ratio (ratio between food and biomass), and the possible formation of more or less toxic or recalcitrant metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as the half-time. Unfortunately, probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratios.
Singular solution of the Feller diffusion equation via a spectral decomposition.
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including, physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
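A short Python sketch of the parameterize-then-slice idea, using a skew-normal cumulative distribution as one possible skew-symmetric interfacial profile (the specific family, widths, and slice count are illustrative choices, not the authors'):

```python
import numpy as np
from scipy import stats

# Asymmetric interface: density ramps from rho_top to rho_bottom following a
# skew-normal CDF; alpha sets how far the density penetrates the adjacent layer.
z = np.linspace(-50.0, 50.0, 2001)        # depth grid (arbitrary length units)
rho_top, rho_bottom = 0.0, 1.0
alpha, width = 4.0, 8.0                   # skewness and interfacial width
profile = rho_top + (rho_bottom - rho_top) * stats.skewnorm.cdf(z, alpha, scale=width)

# Discretize into thin constant-density slices with sharp interfaces, the form
# required by Parratt-type recursions.
n_slices = 100
edges = np.linspace(z[0], z[-1], n_slices + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
slice_density = np.interp(mid, z, profile)        # one constant density per slice
slice_thickness = np.diff(edges)
print(slice_density[:5].round(4), slice_thickness[0])
```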
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for the assessment of significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Simulations on independent benchmark data indicate that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
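A simplified, one-dimensional Python sketch of the scan-plus-Monte-Carlo machinery: a hypergeometric tail probability serves as the scan score (a related but not identical choice to the paper's extreme-value-of-likelihood statistic), and the null distribution is simulated by redistributing the cases via a multivariate hypergeometric draw. All data are synthetic.

```python
import numpy as np
from scipy import stats

def scan_score(cases, pop, windows):
    """Most extreme hypergeometric tail probability over candidate zones;
    smaller values flag a more surprising concentration of cases."""
    N, C = pop.sum(), cases.sum()
    best = 1.0
    for w in windows:
        cz, nz = cases[w].sum(), pop[w].sum()
        best = min(best, stats.hypergeom.sf(cz - 1, N, C, nz))   # P(X >= cz) under H0
    return best

rng = np.random.default_rng(5)
pop = rng.integers(500, 5000, size=30)                 # region populations
cases = rng.binomial(pop, 0.01)                        # 1% background risk
cases[10:13] = rng.binomial(pop[10:13], 0.03)          # planted cluster
windows = [slice(i, j) for i in range(30) for j in range(i + 1, min(i + 6, 31))]

observed = scan_score(cases, pop, windows)
null = [scan_score(rng.multivariate_hypergeometric(pop, cases.sum()), pop, windows)
        for _ in range(199)]                           # Monte Carlo null replicates
p_value = (1 + sum(s <= observed for s in null)) / 200
print(f"scan score: {observed:.2e}, Monte Carlo p-value: {p_value:.3f}")
```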
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first-principles approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration, and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Distribution of distances between DNA barcode labels in nanochannels close to the persistence length
NASA Astrophysics Data System (ADS)
Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.
2015-02-01
We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.
Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.
Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang
2017-02-14
In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and powerline interference, the measured field data were first preprocessed to suppress baseline wandering and harmonics by symmetric wavelet transform and least-squares methods. Statistical analysis was then performed for the atmospheric noise on different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate to describe ELF atmospheric noise preprocessed by a hole-puncher operator, while for VLF atmospheric noise a symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf).
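The Gaussian-versus-SαS tail comparison can be reproduced in miniature with scipy's levy_stable distribution. The sketch below uses surrogate stable noise (alpha = 1.7 is an arbitrary illustrative choice) and compares exceedance probabilities P(|X| > x), the quantity underlying APD plots; a robust interquartile scale is used for the Gaussian fit because stable samples have no finite variance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Surrogate VLF-like noise: symmetric alpha-stable, heavier-tailed than Gaussian
noise = stats.levy_stable.rvs(alpha=1.7, beta=0.0, size=20000, random_state=rng)

x = np.linspace(0.1, 20.0, 100)
emp = np.array([(np.abs(noise) > xi).mean() for xi in x])   # empirical P(|X| > x)
sigma = (np.percentile(noise, 75) - np.percentile(noise, 25)) / 1.349  # robust scale
gauss = 2.0 * stats.norm.sf(x, scale=sigma)
sas = 2.0 * stats.levy_stable.sf(x, alpha=1.7, beta=0.0)

print(emp[-1], gauss[-1], sas[-1])   # the Gaussian model collapses in the tail
```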
Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L
2018-01-01
We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data.
Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus
2016-09-01
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided.
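The defining expectation is straightforward to approximate by Monte Carlo. A hedged sketch for a one-sided two-sample z-test with a Gaussian prior on the true effect (all numbers illustrative); the final histogram exposes the u-shaped density of power values discussed above:

```python
import numpy as np
from scipy import stats

def bayesian_predictive_power(n_per_arm, delta0, tau, sd=1.0, alpha=0.025,
                              n_mc=100_000, seed=7):
    """Monte Carlo expectation of the power of a one-sided two-sample z-test
    over a Gaussian prior delta ~ N(delta0, tau^2) on the true effect."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(delta0, tau, n_mc)          # draws from the prior
    se = sd * np.sqrt(2.0 / n_per_arm)             # SE of the mean difference
    z_crit = stats.norm.ppf(1.0 - alpha)
    power = stats.norm.sf(z_crit - delta / se)     # power at each prior draw
    return power.mean(), power

bpp, draws = bayesian_predictive_power(n_per_arm=100, delta0=0.3, tau=0.15)
print(f"Bayesian predictive power: {bpp:.3f}")
hist, _ = np.histogram(draws, bins=10, range=(0.0, 1.0), density=True)
print(hist.round(2))                               # mass piles up near 0 and 1
```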
Polarization effects on quantum levels in InN/GaN quantum wells.
Lin, Wei; Li, Shuping; Kang, Junyong
2009-12-02
Polarization effects on quantum states in InN/GaN quantum wells have been investigated by means of ab initio calculation and spectroscopic ellipsometry. Through the position-dependent partial densities of states, our results show that the polarization, modified by the strain at different well thicknesses, leads to an asymmetric band bending of the quantum well. The quantum levels are identified via the band structures, and their squared wave function distributions are analyzed via the partial charge densities. Further theoretical and experimental comparison of the imaginary part of the dielectric function shows that the overall transition probability increases under larger polarization fields, which can be attributed to the fact that the excited 2h quantum states have a greater overlap with the 1e states and enhance other hole quantum states in the well by hybridization. These results suggest a new approach to improving the transition probability and light emission by enhancing the polarization fields in a suitable way.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E. T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
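The classic-versus-generalized contrast can be made concrete on the textbook die problem. In this Python sketch the constraint is the mean of a die; the classic route treats the observed mean as exact, while the generalized route draws the mean from an assumed Gaussian and pushes the uncertainty through the MaxEnt map (the paper derives this posterior density analytically; everything below is a numerical illustration):

```python
import numpy as np
from scipy.optimize import brentq

vals = np.arange(1, 7)   # faces of a die

def maxent_probs(mean_value):
    """Classic MaxEnt distribution on {1,...,6} with fixed mean: p_i ∝ exp(lam * i)."""
    def mean_gap(lam):
        w = np.exp(lam * vals)
        return (vals * w).sum() / w.sum() - mean_value
    lam = brentq(mean_gap, -10.0, 10.0)
    w = np.exp(lam * vals)
    return w / w.sum()

# Classic MaxEnt: the empirical mean 4.5 is treated as exact
p_classic = maxent_probs(4.5)

# Generalized: mean ~ N(4.5, 0.2^2); propagate through the MaxEnt map
rng = np.random.default_rng(8)
means = np.clip(rng.normal(4.5, 0.2, 2000), 1.05, 5.95)   # keep constraints feasible
p_draws = np.array([maxent_probs(m) for m in means])
print(p_classic.round(3))                  # point probabilities
print(p_draws.mean(axis=0).round(3))       # posterior mean of the probabilities
print(p_draws.std(axis=0).round(3))        # induced uncertainty on each p_i
```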
Gunier, R B; Harnly, M E; Reynolds, P; Hertz, A; Von Behren, J
2001-01-01
Several studies have suggested an association between childhood cancer and pesticide exposure. California leads the nation in agricultural pesticide use. A mandatory reporting system for all agricultural pesticide use in the state provides information on the active ingredient, amount used, and location. We calculated pesticide use density to quantify agricultural pesticide use in California block groups for a childhood cancer study. Pesticides with similar toxicologic properties (probable carcinogens, possible carcinogens, genotoxic compounds, and developmental or reproductive toxicants) were grouped together for this analysis. To prioritize pesticides, we weighted pesticide use by the carcinogenic and exposure potential of each compound. The top-ranking individual pesticides were propargite, methyl bromide, and trifluralin. We used a geographic information system to calculate pesticide use density in pounds per square mile of total land area for all United States census-block groups in the state. Most block groups (77%) averaged less than 1 pound per square mile of use for 1991-1994 for pesticides classified as probable human carcinogens. However, at the high end of use density (> 90th percentile), there were 493 block groups with more than 569 pounds per square mile. Approximately 170,000 children under 15 years of age were living in these block groups in 1990. The distribution of agricultural pesticide use and number of potentially exposed children suggests that pesticide use density would be of value for a study of childhood cancer.
Three statistical models for estimating length of stay.
Selvin, S
1977-01-01
The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle, and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density in Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle, and it can be readily applied to fitting experimental data.
The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude
Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander
2016-01-01
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude.
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation.
Optimal estimation for the satellite attitude using star tracker measurements
NASA Technical Reports Server (NTRS)
Lo, J. T.-H.
1986-01-01
An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test, providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test, providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset, such as a process and/or apparatus, by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution correlative to normal asset operation and then utilize the fitted probability density function in a dynamic statistical hypothesis test, providing improved asset surveillance.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns of the Deep Space Network antennas, as well as their cumulative gain probability and probability density functions, are developed. These are needed for studying and evaluating interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service system, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
NASA Astrophysics Data System (ADS)
Trolliet, Franck; Forget, Pierre-Michel; Doucet, Jean-Louis; Gillet, Jean-François; Hambuckers, Alain
2017-11-01
Animal-mediated seed dispersal is recognized to influence the spatial organization of plant communities, but little is known about how frugivores cause such patterns. Here, we explored the role of hornbills and primates in generating recruitment foci under two zoochoric trees, namely Staudtia kamerunensis (Myristicaceae) and Dialium spp. (Fabaceae - Caesalpiniodea), in a forest-savanna mosaic landscape in D.R. Congo. We also examined the influence of the availability of fruits in the neighborhood and the amount of forest cover in the landscape on such clumping patterns. The density and species richness of hornbill-dispersed and the density of primate-dispersed seedlings were significantly higher under Staudtia kamerunensis trees than at control locations. However, we did not find such patterns under Dialium spp. trees compared to control locations, except for the density of hornbill-dispersed seedlings, which was lower at control locations. Also, we found that an increasing amount of forest cover in the landscape was associated with an increase in the density of hornbill-dispersed seedlings, although the tendency was weak (R2 = 0.065). We concluded that S. kamerunensis acts as a recruitment focus and plays a structuring role in Afrotropical forests. Hornbills were probably the main frugivore taxon responsible for the clumping under that tree and appear as a key ecological component in fragmented and disturbed landscapes where the diversity of large frugivores such as primates is reduced. Our findings improve our understanding of the causal mechanisms responsible for the spatial organization of tropical forests.
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Epidemic spreading between two coupled subpopulations with inner structures
NASA Astrophysics Data System (ADS)
Ruan, Zhongyuan; Tang, Ming; Gu, Changgui; Xu, Jinshan
2017-10-01
The structure of the underlying contact network and the mobility of agents are two decisive factors for epidemic spreading in reality. Here, we study a model consisting of two coupled subpopulations with internal structures that emphasizes both the contact structure and the recurrent mobility pattern of individuals simultaneously. We show that the coupling of the two subpopulations (via interconnections between them and round trips of individuals) makes the epidemic threshold in each subnetwork the same. Moreover, we find that the interconnection probability between the two subpopulations and the travel rate are important factors for the spreading dynamics. In particular, the epidemic threshold in each subpopulation decreases monotonically as a function of the interconnection probability, which enhances the risk of an epidemic, whereas it displays a non-monotonic variation as the travel rate increases. Moreover, the asymptotic infected density as a function of travel rate in each subpopulation behaves differently depending on the interconnection probability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments.
Probabilistic structural analysis of aerospace components using NESSUS
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.
1988-01-01
Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis is conducted assuming different failure models.
NASA Astrophysics Data System (ADS)
DeMarco, Adam Ward
The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, δu and δT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Utilizing diverse datasets, including idealized wind-tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach can provide a method of characterizing increment fields with the sole use of only four pdf parameters. Also, we investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including wind energy and optical wave propagation.
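As an illustration of the MLE step described above, the sketch below fits a normal inverse Gaussian distribution to a synthetic increment sample with SciPy's generic maximum likelihood fitter; the sample and parameter values are stand-ins, not the study's data.

```python
# Minimal sketch: fit a normal inverse Gaussian (NIG) distribution to
# "velocity increment" data by maximum likelihood. The heavy-tailed
# synthetic sample below is a placeholder for real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
du = stats.norminvgauss(a=1.5, b=0.2, loc=0.0, scale=0.8).rvs(size=5000, random_state=rng)

# Maximum likelihood estimation of the four NIG parameters.
a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(du)
print(f"a={a_hat:.3f}, b={b_hat:.3f}, loc={loc_hat:.3f}, scale={scale_hat:.3f}")

# Simple goodness-of-fit check (Kolmogorov-Smirnov).
ks = stats.kstest(du, 'norminvgauss', args=(a_hat, b_hat, loc_hat, scale_hat))
print(f"KS statistic = {ks.statistic:.4f}")
```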
Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory
NASA Astrophysics Data System (ADS)
Rahimi, A.; Zhang, L.
2012-12-01
Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence between rainfall and runoff needs to be studied. It is known that conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to deriving the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, which includes two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; and (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that: (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff; and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structures that cannot be captured by conventional bivariate joint distributions. [Figures: entropy-based joint rainfall-runoff PDF with corresponding marginal PDFs and histograms for the W12 watershed; table: K-S test results and RMSE for univariate distributions derived from the maximum-entropy joint probability distribution.]
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
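As a rough illustration of the Gaussian-mixture representation used by such mixture density networks, the sketch below evaluates a mixture PDF at a given redshift and computes the probability integral transform (PIT) for a true redshift; the weights, means, and widths are made-up placeholders, not DCMDN output.

```python
# Sketch: evaluate a Gaussian-mixture redshift PDF and its probability
# integral transform (PIT). All mixture parameters are hypothetical.
import numpy as np
from scipy.stats import norm

weights = np.array([0.6, 0.3, 0.1])     # mixture weights (sum to 1)
means   = np.array([0.45, 0.60, 0.90])  # component means in redshift
sigmas  = np.array([0.05, 0.10, 0.20])  # component standard deviations

def gmm_pdf(z):
    """Probability density of the Gaussian mixture at redshift z."""
    return np.sum(weights * norm.pdf(z, means, sigmas))

def pit(z_true):
    """PIT value: mixture CDF evaluated at the true redshift."""
    return np.sum(weights * norm.cdf(z_true, means, sigmas))

print(gmm_pdf(0.5), pit(0.52))  # density at z=0.5; PIT for z_true=0.52
```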
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency.
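The core quantity underlying these algorithms is the probability that the distance between two uncertain objects falls below a boundary value. The sketch below estimates it by Monte Carlo sampling under Gaussian positional uncertainty; note that this is the baseline sampling-style approach that PDBSCAN replaces with a more accurate analytical computation.

```python
# Sketch of the core quantity in density-based clustering of uncertain
# data: P(dist(X, Y) <= eps) for two objects with positional uncertainty,
# estimated here by plain Monte Carlo for illustration.
import numpy as np

rng = np.random.default_rng(1)

def prob_within_eps(mu_x, cov_x, mu_y, cov_y, eps, n=100_000):
    """Estimate P(||X - Y|| <= eps) for Gaussian-uncertain objects."""
    xs = rng.multivariate_normal(mu_x, cov_x, size=n)
    ys = rng.multivariate_normal(mu_y, cov_y, size=n)
    return np.mean(np.linalg.norm(xs - ys, axis=1) <= eps)

p = prob_within_eps([0, 0], np.eye(2) * 0.1, [1, 0], np.eye(2) * 0.2, eps=1.0)
print(f"P(distance <= eps) ~ {p:.3f}")
```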
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link experiences Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
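A minimal sketch of the disclosed two-step recipe, assuming an illustrative power-law-type spectrum and an exponential target amplitude distribution (both arbitrary choices, not the paper's examples):

```python
# Step 1: color white Gaussian noise to a target power spectral density
# by FFT filtering. Step 2: map the colored Gaussian field pointwise
# through its CDF and then the inverse CDF of the desired amplitude pdf.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 256
white = rng.standard_normal((n, n))

# Step 1: impose a spectrum |H(k)|^2 ~ 1/(1 + k^2) (illustrative).
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
H = 1.0 / np.sqrt(1.0 + (kx**2 + ky**2) * 400.0)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * H))
colored /= colored.std()  # renormalize to unit variance

# Step 2: Gaussian -> uniform -> exponential marginal (correlation is
# approximately preserved; this is the "engineering approach" caveat).
u = stats.norm.cdf(colored)
field = stats.expon.ppf(u, scale=2.0)
print(field.shape, field.mean())
```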
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
An approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a model of laser beam propagation in the maritime domain.
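A minimal sketch of the Kernel Density Method named in the report, using SciPy's Gaussian KDE on synthetic data standing in for measured irradiance values:

```python
# Estimate a pdf from samples with a Gaussian kernel; gaussian_kde
# chooses the bandwidth automatically (Scott's rule by default).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
samples = rng.normal(loc=0.0, scale=1.0, size=1000)  # placeholder data

kde = gaussian_kde(samples)
grid = np.linspace(-4, 4, 200)
pdf_hat = kde(grid)          # estimated density evaluated on a grid
print(pdf_hat.max())
```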
NASA Astrophysics Data System (ADS)
Nagae, Yuki; Kurosawa, Masashi; Shibayama, Shigehisa; Araidai, Masaaki; Sakashita, Mitsuo; Nakatsuka, Osamu; Shiraishi, Kenji; Zaima, Shigeaki
2016-08-01
We have carried out density functional theory (DFT) calculations for the Si1-xSnx alloy and investigated the effect of the displacement of Si and Sn atoms with strain relaxation on the lattice constant and E-k dispersion. We calculated the formation probabilities for all atomic configurations of Si1-xSnx according to the Boltzmann distribution. The average lattice constant and E-k dispersion were weighted by the formation probability of each configuration of Si1-xSnx. We estimated the displacement of Si and Sn atoms from the initial tetrahedral site in the Si1-xSnx unit cell considering structural relaxation under hydrostatic pressure, and we found that the breaking of the degenerate electronic levels of the valence band edge could be caused by the breaking of the tetrahedral symmetry. We also calculated the E-k dispersion of the Si1-xSnx alloy by the DFT+U method and found that a Sn content above 50% would be required for the indirect-direct transition.
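The Boltzmann weighting of configurations described above can be sketched in a few lines; the formation energies and lattice constants below are hypothetical placeholders rather than the paper's DFT results.

```python
# Weight each atomic configuration by exp(-E_i / kT) and form the
# probability-weighted average lattice constant. Numbers are made up.
import numpy as np

kT = 0.0259                               # eV, room temperature
E = np.array([0.00, 0.05, 0.12, 0.20])    # formation energies (eV), hypothetical
a = np.array([5.52, 5.54, 5.57, 5.60])    # lattice constants (angstrom), hypothetical

w = np.exp(-E / kT)
p = w / w.sum()             # formation probabilities (Boltzmann distribution)
a_avg = np.sum(p * a)       # probability-weighted average lattice constant
print(p.round(3), f"a_avg = {a_avg:.3f} A")
```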
Li, Xin; Li, Ye
2015-01-01
Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or to predict breathing motion to allow beam delivery under free-breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. The defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the sense of least squares, and the evaluation results show that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
The effect of magnetic field on RbCl quantum pseudodot qubit
NASA Astrophysics Data System (ADS)
Xiao, Jing-Lin
2015-07-01
Under the condition of strong electron-LO-phonon coupling in a RbCl quantum pseudodot (QPD) with an applied magnetic field (MF), the eigenenergies and the eigenfunctions of the ground and the first excited states (GFES) are obtained by using a variational method of the Pekar type (VMPT). A single qubit can be realized in this two-level quantum system. The electron's probability density oscillates in the RbCl QPD with a certain period of T0 = 7.933 fs when the electron is in the superposition state of the GFES. The results indicate that, due to the presence of the asymmetrical structure in the z direction of the RbCl QPD, the electron's probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a symmetric structure in the x and y directions of the RbCl QPD. The oscillating period is an increasing function of the cyclotron frequency and the polaron radius, whereas it is a decreasing one of the chemical potential of the two-dimensional electron gas and the zero point of the pseudoharmonic potential (PP).
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)
NASA Astrophysics Data System (ADS)
Peters, Christina; Malz, Alex; Hlozek, Renée
2018-01-01
The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts.
Ant-inspired density estimation via random walks.
Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A
2017-10-03
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
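A toy version of the encounter-rate estimator can be simulated directly; the grid size, agent count, and walk length below are arbitrary illustrative choices, and the density estimate is simply encounters per agent per step.

```python
# Anonymous agents random-walk on a torus grid and estimate global
# density from how often they land on a cell occupied by other agents.
import numpy as np

rng = np.random.default_rng(4)
side, n_agents, steps = 50, 250, 400
true_density = n_agents / side**2

pos = rng.integers(0, side, size=(n_agents, 2))
encounters = np.zeros(n_agents)
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

for _ in range(steps):
    pos = (pos + moves[rng.integers(0, 4, n_agents)]) % side
    # For each agent, count how many *other* agents share its cell.
    cell_id = pos[:, 0] * side + pos[:, 1]
    counts = np.bincount(cell_id, minlength=side * side)
    encounters += counts[cell_id] - 1

est_density = encounters.mean() / steps   # encounters per agent per step
print(f"true {true_density:.4f}  estimated {est_density:.4f}")
```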
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar deutsche mark future exchange, finding good agreement between theory and the observed data.
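As a rough illustration of the formalism, the sketch below builds a CTRW price path once the two auxiliary densities are chosen; the exponential pausing-time and Laplace jump densities are illustrative assumptions, not the distributions fitted in the paper.

```python
# Continuous-time random walk for log-prices: draw waiting times and
# jump magnitudes, then accumulate jumps at the random event times.
import numpy as np

rng = np.random.default_rng(5)
n_jumps = 10_000
waits = rng.exponential(scale=30.0, size=n_jumps)       # pausing-time density (s)
jumps = rng.laplace(loc=0.0, scale=1e-4, size=n_jumps)  # jump-magnitude density

event_times = np.cumsum(waits)
log_price = np.cumsum(jumps)

def price_at(t):
    """Log-price at time t: sum of all jumps occurring up to t."""
    k = np.searchsorted(event_times, t, side='right')
    return log_price[k - 1] if k > 0 else 0.0

print(price_at(3600.0))   # log-price after one hour
```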
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Millimeter-wave Line Ratios and Sub-beam Volume Density Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroy, Adam K.; Gallagher, Molly; Usero, Antonio
We explore the use of mm-wave emission line ratios to trace molecular gas density when observations integrate over a wide range of volume densities within a single telescope beam. For observations targeting external galaxies, this case is unavoidable. Using a framework similar to that of Krumholz and Thompson, we model emission for a set of common extragalactic lines from lognormal and power law density distributions. We consider the median density of gas that produces emission and the ability to predict density variations from observed line ratios. We emphasize line ratio variations because these do not require us to know the absolute abundance of our tracers. Patterns of line ratio variations have the potential to illuminate the high-end shape of the density distribution, and to capture changes in the dense gas fraction and median volume density. Our results with and without a high-density power law tail differ appreciably; we highlight better knowledge of the probability density function (PDF) shape as an important area. We also show the implications of sub-beam density distributions for isotopologue studies targeting dense gas tracers. Differential excitation often implies a significant correction to the naive case. We provide tabulated versions of many of our results, which can be used to interpret changes in mm-wave line ratios in terms of adjustments to the underlying density distributions.
A statistical analysis of the elastic distortion and dislocation density fields in deformed crystals
Mohamed, Mamdouh S.; Larson, Bennett C.; Tischler, Jonathan Z.; ...
2015-05-18
The statistical properties of the elastic distortion fields of dislocations in deforming crystals are investigated using the method of discrete dislocation dynamics to simulate dislocation structures and dislocation density evolution under tensile loading. Probability distribution functions (PDF) and pair correlation functions (PCF) of the simulated internal elastic strains and lattice rotations are generated for tensile strain levels up to 0.85%. The PDFs of simulated lattice rotation are compared with sub-micrometer resolution three-dimensional X-ray microscopy measurements of rotation magnitudes and deformation length scales in 1.0% and 2.3% compression strained Cu single crystals to explore the linkage between experiment and the theoretical analysis. The statistical properties of the deformation simulations are analyzed through determinations of the Nye and Kröner dislocation density tensors. The significance of the magnitudes and the length scales of the elastic strain and the rotation parts of dislocation density tensors are demonstrated, and their relevance to understanding the fundamental aspects of deformation is discussed.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
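The described scheme, importance sampling with a multiplicative update factor, is reminiscent of Wang-Landau sampling. As a loose illustration (not the paper's algorithm), the sketch below estimates the density of states of a toy 1D Ising chain with a flat-histogram walk whose log-space update factor is progressively tightened.

```python
# Wang-Landau-style density-of-states sketch for a 20-spin open chain
# with E = -sum_i s_i s_{i+1}. Energies range over {-19, -17, ..., 19}.
import numpy as np

rng = np.random.default_rng(6)
N = 20
spins = rng.choice([-1, 1], size=N)

def energy(s):
    return -int(np.sum(s[:-1] * s[1:]))

E_levels = list(range(-(N - 1), N, 2))   # reachable energies of the chain
idx = {E: i for i, E in enumerate(E_levels)}
log_g = np.zeros(len(E_levels))          # running estimate of ln g(E)
lnf = 1.0                                # update factor, in log space

E_cur = energy(spins)
while lnf > 1e-3:
    for _ in range(10_000):
        k = rng.integers(N)
        spins[k] *= -1                   # propose a single spin flip
        E_new = energy(spins)
        # Accept with min(1, g(E_cur)/g(E_new)) to flatten visits over E.
        if np.log(rng.random()) < log_g[idx[E_cur]] - log_g[idx[E_new]]:
            E_cur = E_new
        else:
            spins[k] *= -1               # reject: undo the flip
        log_g[idx[E_cur]] += lnf
    lnf /= 2   # a full implementation would first check histogram flatness

print(np.round(log_g - log_g.min(), 2))  # relative ln(density of states)
```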
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to the presence of tumour cells but only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, that is, Erlang, triangular, and uniform probability densities, separated or not from zero. Particular instability results are obtained for a general type of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using a standard algorithm for ordinary differential equations or differential equations with discrete delays.
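The linear chain trick invoked for the numerical procedures can be illustrated on a scalar toy problem: an Erlang(k, a) delay kernel is equivalent to a cascade of k first-order stages, which turns the distributed-delay equation into ordinary ODEs. The feedback equation below is a made-up example, not the tumour-immune model itself.

```python
# Linear chain trick: x'(t) = -(Erlang-weighted history of x). The last
# chain variable y_k equals the integral of x against the Erlang kernel.
import numpy as np
from scipy.integrate import solve_ivp

k, a = 3, 2.0          # Erlang shape and rate of the delay kernel

def rhs(t, u):
    x, y = u[0], u[1:]          # y[0..k-1] are the chain variables
    dx = -y[-1]                 # delayed feedback enters via the last stage
    dy = np.empty(k)
    dy[0] = a * (x - y[0])
    dy[1:] = a * (y[:-1] - y[1:])
    return np.concatenate(([dx], dy))

sol = solve_ivp(rhs, (0, 10), np.array([1.0] + [0.0] * k), dense_output=True)
print(sol.y[0, -1])             # x(10) under the Erlang-delayed feedback
```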
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Breeding phenology of birds: mechanisms underlying seasonal declines in the risk of nest predation.
Borgmann, Kathi L; Conway, Courtney J; Morrison, Michael L
2013-01-01
Seasonal declines in avian clutch size are well documented, but seasonal variation in other reproductive parameters has received less attention. For example, the probability of complete brood mortality typically explains much of the variation in reproductive success and often varies seasonally, but we know little about the underlying cause of that variation. This oversight is surprising given that nest predation influences many other life-history traits and varies throughout the breeding season in many songbirds. To determine the underlying causes of observed seasonal decreases in risk of nest predation, we modeled nest predation of Dusky Flycatchers (Empidonax oberholseri) in northern California as a function of foliage phenology, energetic demand, developmental stage, conspecific nest density, food availability for nest predators, and nest predator abundance. Seasonal variation in the risk of nest predation was not associated with seasonal changes in energetic demand, conspecific nest density, or predator abundance. Instead, seasonal variation in the risk of nest predation was associated with foliage density (early, but not late, in the breeding season) and seasonal changes in food available to nest predators. Supplemental food provided to nest predators resulted in a numerical response by nest predators, increasing the risk of nest predation at nests that were near supplemental feeders. Our results suggest that seasonal changes in foliage density and factors associated with changes in food availability for nest predators are important drivers of temporal patterns in risk of avian nest predation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulyamov, G., E-mail: Gulyamov1949@rambler.ru; Sharibaev, N. U.
2011-02-15
The temporal dependence of thermal generation of electrons from occupied surface states at the semiconductor-insulator interface in a metal-insulator-semiconductor structure is studied. It is established that, at low temperatures, the derivative of the probability of depopulation of occupied surface states with respect to energy is represented by the Dirac δ function. It is shown that the density of states of a finite number of discrete energy levels under high-temperature measurements manifests itself as a continuous spectrum, whereas this spectrum appears discrete at low temperatures. A method for processing the continuous spectrum of the density of surface states is suggested; this method makes it possible to determine the discrete energy spectrum. The obtained results may be conducive to an increase in the resolution of the method of non-stationary spectroscopy of surface states.
NASA Astrophysics Data System (ADS)
Oladi, Mahshid; Shokri, Mohammad Reza; Rajabi-Maham, Hassan
2017-06-01
The 'Coral Health Chart' has become a popular tool for monitoring coral bleaching worldwide. The scleractinian coral Acropora downingi (Wallace 1999) is highly vulnerable to temperature anomalies in the Persian Gulf. Our study tested the reliability of Coral Health Chart scores for the assessment of bleaching-related changes in the mitotic index (MI) and density of zooxanthellae cells in A. downingi in Qeshm Island, the Persian Gulf. The results revealed that, at least under severe conditions, the chart can be used as an effective proxy for detecting changes in the density of normal, transparent, or degraded zooxanthellae and MI. However, its ability to discern changes in pigment concentration and total zooxanthellae density should be viewed with some caution in the Gulf region, probably because the high levels of environmental variability in this region result in inherent variations in the characteristics of zooxanthellae among "healthy" looking corals.
Modeling of the reactant conversion rate in a turbulent shear flow
NASA Technical Reports Server (NTRS)
Frankel, S. H.; Madnia, C. K.; Givi, P.
1992-01-01
Results are presented of direct numerical simulations (DNS) of spatially developing shear flows under the influence of infinitely fast chemical reactions of the type A + B yields Products. The simulation results are used to construct the compositional structure of the scalar field in a statistical manner. The results of this statistical analysis indicate that the use of a Beta density for the probability density function (PDF) of an appropriate Shvab-Zeldovich mixture fraction provides a very good estimate of the limiting bounds of the reactant conversion rate within the shear layer. This provides a strong justification for the implementation of this density in practical modeling of non-homogeneous turbulent reacting flows. However, the validity of the model cannot be generalized for predictions of higher order statistical quantities. A closed form analytical expression is presented for predicting the maximum rate of reactant conversion in non-homogeneous reacting turbulence.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Stochastic characteristics and Second Law violations of atomic fluids in Couette flow
NASA Astrophysics Data System (ADS)
Raghavan, Bharath V.; Karimi, Pouyan; Ostoja-Starzewski, Martin
2018-04-01
Using non-equilibrium molecular dynamics (NEMD) simulations, we study the statistical properties of an atomic fluid undergoing planar Couette flow, in which particles interact via a Lennard-Jones potential. We draw a connection between local density contrast and temporal fluctuations in the shear stress, which arise naturally through the equivalence between the dissipation function and entropy production according to the fluctuation theorem. We focus on the shear stress and the spatio-temporal density fluctuations and study the autocorrelations and spectral densities of the shear stress. The bispectral density of the shear stress is used to measure the degree of departure from a Gaussian model and the degree of nonlinearity induced in the system by the applied strain rate. More evidence is provided by the probability density function of the shear stress. We use information theory to account for the departure from Gaussian statistics and to develop a more general probability distribution function that captures this broad range of effects. By accounting for negative shear-stress increments, we show how this distribution preserves the violations of the Second Law of Thermodynamics observed in planar Couette flow of atomic fluids, and also how it captures the non-Gaussian nature of the system by allowing for non-zero higher moments. We also demonstrate how the temperature affects the bandwidth of the shear stress and how the density affects its power spectral density, thus determining the conditions under which the shear stress acts as a narrow-band or wide-band random process. We show that changes in the statistical characteristics of the parameters of interest occur at a critical strain rate at which an ordering transition occurs in the fluid, causing shear thinning and affecting its stability. A critical strain rate of this kind is also predicted by the Loose-Hess stability criterion.
Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V
2010-04-01
Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
Redundancy and reduction: Speakers manage syntactic information density
Jaeger, T. Florian
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
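A minimal sketch of the surprisal computation that underlies Uniform Information Density; the contextual probabilities below are invented for illustration, not corpus estimates:

import math

def surprisal(p):
    # Information (in bits) carried by an event with contextual probability p.
    return -math.log2(p)

# Hypothetical probabilities of the complement-clause onset, with and without
# the complementizer "that" (illustrative numbers only).
p_onset_without_that = 0.05   # onset is surprising -> high information density
p_onset_with_that = 0.40      # "that" signals the clause -> onset less surprising

print(surprisal(p_onset_without_that))  # ~4.32 bits
print(surprisal(p_onset_with_that))     # ~1.32 bits

On this view, inserting "that" spreads the same information over more signal, smoothing peaks in information density.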
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
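A hedged sketch of the workflow (fit candidate probability density functions directly to raw depth data, then select among candidates quantitatively); the simulated data and the AIC criterion are illustrative stand-ins for the paper's specific choices:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
depths = rng.gamma(shape=4.0, scale=0.15, size=200)  # stand-in for field depths (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)            # ML estimates directly from raw data
    k = len(params) - 1                          # free parameters (loc held at 0)
    aic = 2 * k - 2 * np.sum(dist.logpdf(depths, *params))
    print(name, round(aic, 1))                   # smallest AIC picks the HSC curve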
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
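A minimal Monte Carlo sketch of the stochastic approach under the stated "gradient" (constant peak width) assumption: components are placed uniformly at random, and a run counts as separated when every adjacent pair sits more than one peak width apart. Numbers are illustrative:

import numpy as np

def p_success(m, nc, trials=20000, seed=0):
    # Place m peak centers uniformly in [0, 1]; a trial is "separated" when all
    # adjacent centers differ by more than 1/nc (one peak width at capacity nc).
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        centers = np.sort(rng.random(m))
        if np.all(np.diff(centers) > 1.0 / nc):
            ok += 1
    return ok / trials

print(p_success(m=10, nc=100))   # ~0.39; agrees with the order-statistics
                                 # result (1 - (m - 1)/nc)**m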
Spectrum sensing based on cumulative power spectral density
NASA Astrophysics Data System (ADS)
Nasser, A.; Mansour, A.; Yao, K. C.; Abdallah, H.; Charara, H.
2017-12-01
This paper presents new spectrum sensing algorithms based on the cumulative power spectral density (CPSD). The proposed detectors examine the CPSD of the received signal to decide on the absence or presence of the primary user (PU) signal. These detectors require the noise in the band of interest to be white. The false alarm and detection probabilities are derived analytically and simulated under Gaussian and Rayleigh fading channels. Our proposed detectors perform better than the energy detector (ED) or the cyclostationary detector (CSD). Moreover, in the presence of noise uncertainty (NU), they are shown to be more robust than the ED, with less performance loss. To remove the influence of NU, we modified our algorithms to be independent of the noise variance.
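A minimal sketch of one plausible CPSD-style test statistic, consistent with the whiteness assumption above: under noise only, the normalized cumulative PSD grows roughly linearly with frequency, so a large deviation from the linear ramp suggests a PU signal. The Welch settings, band, and signal parameters are illustrative assumptions, not the paper's exact detector:

import numpy as np
from scipy import signal

def cpsd_statistic(x, fs, band):
    f, psd = signal.welch(x, fs=fs, nperseg=256)
    sel = (f >= band[0]) & (f <= band[1])
    cpsd = np.cumsum(psd[sel])
    cpsd /= cpsd[-1]                      # normalize CPSD to [0, 1]
    ref = np.linspace(0, 1, cpsd.size)    # ideal white-noise CPSD
    return np.max(np.abs(cpsd - ref))     # large deviation -> declare PU present

fs = 1e4
t = np.arange(2048) / fs
noise = np.random.default_rng(2).normal(size=t.size)
pu = 0.7 * np.sin(2 * np.pi * 1.2e3 * t)
print(cpsd_statistic(noise, fs, (0, 5e3)))        # small under H0
print(cpsd_statistic(noise + pu, fs, (0, 5e3)))   # larger under H1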
Cho, Hyun-Ju; Park, Dong-Uk; Yoon, Jisun; Lee, Eun; Yang, Song-I; Kim, Young-Ho; Lee, So-Yeon
2017-01-01
Background Children who were only exposed to a mixture of chloromethylisothiazolinone (CMIT) and methylisothiazolinone (MIT) as humidifier disinfectant (HD) components were evaluated for humidifier disinfectant-associated lung injury (HDLI) from 2012. This study aimed to evaluate pulmonary function using impulse oscillometry (IOS) in children exposed to a mixture of CMIT/MIT from HD. Methods Twenty-four children who were only exposed to a mixture of CMIT/MIT, with no previous underlying disease, were assessed by IOS. Diagnostic criteria for HDLI were categorized as definite, probable, possible, or unlikely. Home visits and administration of a standardized questionnaire were arranged to assess exposure characteristics. Results Definite and probable cases showed higher airborne disinfectant exposure intensity during sleep (32.4 ± 8.7 μg/m3) and younger age at initial exposure (3.5 ± 3.3 months) compared with unlikely cases (17.3 ± 11.0 μg/m3, p = 0.026; 22.5 ± 26.2 months, p = 0.039, respectively). Reactance at 5 Hz was significantly more negative in those with high-density exposure during sleep (mean, -0.463 kPa/L/s vs. low density, -0.296, p = 0.001). The reactance area was also higher with high-density exposure during sleep (mean, 3.240 kPa/L vs. low density, 1.922, p = 0.039). The mean bronchodilator response with high-density exposure was within the normal range for reactance. Conclusions Significant peripheral airway dysfunction was found in children with high levels of inhalation exposure to a mixture of CMIT/MIT during sleep. Strict regulation of exposure to CMIT/MIT mixtures was associated with positive effects on children's lung function. PMID:28453578
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Pöschl-Teller potential. The wave function and probability density are plotted using the Delphi 7.0 programming language. Finally, the expectation value of a quantum-mechanical operator can be calculated analytically using the integral form or from the probability density graph produced by the program.
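A sketch of the probability-density and expectation-value step in Python rather than Delphi, using the standard ground state of the modified Pöschl-Teller well V(x) = -0.5 * a**2 * lam * (lam - 1) / cosh(a*x)**2 in units hbar = m = 1 (parameters a and lam are illustrative):

import numpy as np

a, lam = 1.0, 3.0
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psi0 = np.cosh(a * x) ** (-(lam - 1.0))      # ground state ~ sech(a*x)**(lam-1)
psi0 /= np.sqrt(np.sum(psi0**2) * dx)        # numerical normalization

density = psi0**2                            # probability density |psi|^2
print(np.sum(x * density) * dx)              # <x> ~ 0 by symmetry
print(np.sum(x**2 * density) * dx)           # <x^2> computed from the density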
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Bronkhorst, Curt Allan
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
NASA Astrophysics Data System (ADS)
Boslough, M.
2011-12-01
Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based on the global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative density function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g. agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
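A minimal sketch of the family-of-contracts idea described above: treat each contract price as an exceedance probability, build the market-based cumulative density function, and read off a best estimate and spread. Thresholds and prices are invented for illustration:

import numpy as np

thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])   # deg C anomaly
prices = np.array([0.90, 0.75, 0.50, 0.25, 0.08])       # P(anomaly > T) per market

cdf = 1.0 - prices                                 # market-based CDF, increasing
best_estimate = np.interp(0.5, cdf, thresholds)    # median by interpolation
spread = np.interp(0.84, cdf, thresholds) - np.interp(0.16, cdf, thresholds)
print(best_estimate, spread)                       # ~0.65 C, ~0.3 C wide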
Haberbeck, L U; Oliveira, R C; Vivijs, B; Wenseleers, T; Aertsen, A; Michiels, C; Geeraerd, A H
2015-02-01
This study investigated the variation in growth/no growth boundaries of 188 Escherichia coli strains. Experiments were conducted in Luria-Bertani media under 36 combinations of lactic acid (LA) (0 and 25 mM), pH (3.8, 3.9, 4.0, 4.1, 4.2 and 4.3 for 0 mM LA and 4.3, 4.4, 4.5, 4.6, 4.7 and 4.8 for 25 mM LA) and temperature (20, 25 and 30 °C). After 3 days of incubation, growth was monitored through optical density measurements. For each strain, a so-called purposeful selection approach was used to fit a logistic regression model that adequately predicted the likelihood for growth. Further, to assess the growth/no growth variability for all the strains at once, a generalized linear mixed model was fitted to the data. Strain was fitted as a fixed factor and replicate as a random blocking factor. E. coli O157:H7 strain ATCC 43888 was used as reference strain allowing a comparison with the other strains. Out of the 188 strains tested, 140 strains (∼75%) presented a significantly higher probability of growth under low pH conditions than the O157:H7 strain ATCC 43888, whereas 20 strains (∼11%) showed a significantly lower probability of growth under high pH conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart PDF for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
A theory of stationarity and asymptotic approach in dissipative systems
NASA Astrophysics Data System (ADS)
Rubel, Michael Thomas
2007-05-01
The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?) To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) at the entrance of the merging part of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, while it increases road safety at high densities. The phase diagram of the system is also constructed.
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
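A minimal piecewise-constant transfer-matrix sketch of the scattering approach, in units where hbar^2/2m = 1; the square-barrier profile and energy are illustrative, not a device band diagram:

import numpy as np

def transmission(E, V, dx):
    # k_j = sqrt(E - V_j) becomes imaginary inside a barrier (evanescent states).
    k = np.sqrt(E - np.asarray(V, dtype=complex))
    M = np.eye(2, dtype=complex)
    for j in range(len(V) - 1):
        r = k[j] / k[j + 1]
        # match psi and psi' at the interface, then propagate dx in region j+1
        step = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]])
        prop = np.diag([np.exp(1j * k[j + 1] * dx), np.exp(-1j * k[j + 1] * dx)])
        M = prop @ step @ M
    rfl = -M[1, 0] / M[1, 1]            # no wave incident from the right
    t = M[0, 0] + M[0, 1] * rfl
    return abs(t) ** 2 * (k[-1].real / k[0].real)

V = [0.0] + [1.0] * 20 + [0.0]          # square barrier: height 1, width 2
print(transmission(E=0.5, V=V, dx=0.1)) # ~0.21, matching the analytic result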
Biological properties of disturbed and undisturbed Cerrado sensu stricto from Northeast Brazil.
Araújo, A S F; Magalhaes, L B; Santos, V M; Nunes, L A P L; Dias, C T S
2017-03-01
The aim of this study was to measure soil microbial biomass and soil surface fauna in undisturbed and disturbed Cerrado sensu stricto (Css) from Sete Cidades National Park, Northeast Brazil. The following sites were sampled under Cerrado sensu stricto (Css) at the park: undisturbed and disturbed Css (slash-and-burn agricultural practices). Total organic and microbial biomass C were higher in undisturbed than in disturbed sites in both seasons. However, microbial biomass C was higher in the wet than in the dry season. Soil respiration did not vary among sites but was higher in the wet than in the dry season. The densities of Araneae, Coleoptera, and Orthoptera were higher in the undisturbed site, whereas the densities of Formicidae were higher in the disturbed site. Non-metric multidimensional scaling analysis separated undisturbed from disturbed sites according to soil biological properties. Disturbance by agricultural practices, such as slash-and-burn, probably resulted in the deterioration of the biological properties of soil under native Cerrado sensu stricto in the Sete Cidades National Park.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eremin, N. N., E-mail: neremin@geol.msu.ru; Grechanovsky, A. E.; Marchenko, E. I.
Semi-empirical and ab initio theoretical investigation of crystal structure geometry, interatomic distances, phase densities and elastic properties for some CaAl₂O₄ phases under pressures up to 200 GPa was performed. Two independent simulation methods predicted the appearance of a still unknown super-dense CaAl₂O₄ modification. In this structure, the Al coordination polyhedron might be described as distorted one with seven vertices. Ca atoms were situated inside polyhedra with ten vertices and Ca–O distances from 1.96 to 2.49 Å. It became the densest modification under pressures of 170 GPa (density functional theory prediction) or 150 GPa (semi-empirical prediction). Both approaches indicated that this super-dense CaAl₂O₄ modification with a "stuffed α-PbO₂" type structure could be a probable candidate for mutual accumulation of Ca and Al in the lower mantle. The existence of this phase can be verified experimentally using high pressure techniques.
Meddad-Hamza, Amel; Hamza, Nabila; Neffar, Souad; Beddiar, Arifa; Gianinazzi, Silvio; Chenchouni, Haroun
2017-04-01
This study aims to determine the spatiotemporal dynamics of root colonization and spore density of arbuscular mycorrhizal fungi (AMF) in the rhizosphere of olive trees (Olea europaea) with different plantation ages and under different climatic areas in Algeria. Soil and root samples were seasonally collected from three olive plantations of different ages. Other samples were carried out in productive olive orchards cultivated under a climatic gradient (desertic, semi-arid, subhumid, and humid). The olive varieties analysed in this study were Blanquette, Rougette, Chemlel and the wild-olive. Spore density, mycorrhization intensity (M%), spore diversity and the most probable number (MPN) were determined. Both the intensity of mycorrhizal colonization and spore density increased with the increase of seasonal precipitation and decreased with the increase of air temperature regardless of the climatic region or olive variety. The variety Rougette had the highest mycorrhizal levels in all plantation ages and climates. Spore community was composed of the genera Rhizophagus, Funneliformis, Glomus, Septoglomus, Gigaspora, Scutellospora and Entrophospora. The genus Glomus, with four species, predominated in all climate regions. Spores of Gigaspora sp. and Scutellospora sp. were the most abundant in desertic plantations. Statistical models indicated a positive relationship between spore density and M% during spring and winter in young seedlings and old plantations. A significant positive relationship was found between MPN and spore density under different climates. For a mycotrophic species, the rhizosphere of olive trees proved to be poor in mycorrhiza in terms of mycorrhizal colonization and numbers of the infective AMF propagules. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
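A minimal sketch of a moment-constrained maximum entropy fit on a grid (not the paper's Bayesian field theory software): the density has the form p(x) proportional to exp(l1*x + l2*x**2), with multipliers tuned to match the first two sample moments. Data, grid, and optimizer are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = rng.normal(loc=1.0, scale=0.5, size=1000)
x = np.linspace(-2.0, 4.0, 601)
dx = x[1] - x[0]
target = np.array([data.mean(), np.mean(data**2)])

def moment_gap(lams):
    logw = lams[0] * x + lams[1] * x**2
    w = np.exp(logw - logw.max())
    w /= w.sum() * dx                          # normalized density on the grid
    mom = np.array([np.sum(w * x), np.sum(w * x**2)]) * dx
    return np.sum((mom - target) ** 2)

res = minimize(moment_gap, x0=[0.0, -1.0], method="Nelder-Mead")
print(res.x)   # for Gaussian data: l2 -> -1/(2*sigma**2) = -2, l1 -> mu/sigma**2 = 4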
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses a binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
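The binomial arithmetic behind the 29-of-29 point estimate, worked out (a standard result; the 0.90 POD value is the hypothesis being tested, not measured data):

# If the true POD at the demonstrated flaw size were exactly 0.90, the chance
# of detecting all 29 of 29 flaws is
p_all = 0.90 ** 29
print(p_all)   # ~0.047 < 0.05, which is why a clean sweep of 29 demonstrates
               # 90% POD at 95% confidence (90/95)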
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) in determining neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μAPPJ) is investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
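A sketch of how γ typically enters such a global model, via a Chantry-type diffusion-plus-sticking wall-loss frequency; the geometry and transport values below are illustrative assumptions, not the COST-μAPPJ's actual parameters:

import numpy as np

def wall_loss_rate(gamma, D=1.3e-4, Lam=1.0e-4, V=3.0e-8, A=6.0e-4, vbar=2700.0):
    # D: diffusion coefficient (m^2/s), Lam: effective diffusion length (m),
    # V: plasma volume (m^3), A: wall area (m^2), vbar: mean thermal speed (m/s).
    # Loss frequency = 1 / (diffusion time + sticking time); the sticking time
    # carries the Motz-Wise factor 2*(2 - gamma)/gamma.
    return 1.0 / (Lam**2 / D + 2.0 * V * (2.0 - gamma) / (A * vbar * gamma))

for g in (0.001, 0.01, 0.1, 1.0):
    print(g, wall_loss_rate(g))   # rate rises with gamma, saturating at the
                                  # diffusion-limited value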
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). SLZ in heterogeneous traffic has an important effect, particularly in the mixed-velocity case. In the deterministic case, SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.
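A simplified sketch in the spirit of these NS-model studies: a single-lane ring with one speed limit zone, counting Boccara-style dangerous situations (gap at most vmax, leader moving and then abruptly stopped) as a proxy for Pac. The geometry, parameters, and accident criterion are illustrative, not the papers' exact setups:

import numpy as np

rng = np.random.default_rng(4)
L, N, vmax, p_brake, steps = 200, 40, 5, 0.2, 2000
pos = np.sort(rng.choice(L, size=N, replace=False))
vel = np.zeros(N, dtype=int)
dangerous = 0
for _ in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L
    v_lead_before = np.roll(vel, -1)
    vel = np.minimum(vel + 1, vmax)              # accelerate
    in_zone = (pos >= 80) & (pos < 120)          # speed limit zone: v <= 2
    vel[in_zone] = np.minimum(vel[in_zone], 2)
    vel = np.minimum(vel, gaps)                  # keep a safe gap
    brake = rng.random(N) < p_brake              # random deceleration
    vel[brake] = np.maximum(vel[brake] - 1, 0)
    v_lead_after = np.roll(vel, -1)
    dangerous += np.count_nonzero((gaps <= vmax) & (v_lead_before > 0) & (v_lead_after == 0))
    pos = (pos + vel) % L
print(dangerous / (steps * N))                   # per-car, per-step Pac proxy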
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function LL(ρ(x)) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
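A sketch of evaluating such a local log-likelihood as a three-component mixture; the Gaussian conditional densities and the probabilities below are invented placeholders for the map-derived quantities in the patent:

import numpy as np

def local_log_likelihood(rho, p_prot, p_solv, p_h,
                         mu=(0.4, 0.0, 0.6), sig=(0.3, 0.1, 0.2)):
    # Mixture over the hypotheses "x in protein", "x in solvent", "x near a
    # known structural motif", weighted by their prior probabilities at x.
    def gauss(r, m, s):
        return np.exp(-0.5 * ((r - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    mix = (p_prot * gauss(rho, mu[0], sig[0])
           + p_solv * gauss(rho, mu[1], sig[1])
           + p_h * gauss(rho, mu[2], sig[2]))
    return np.log(mix)

print(local_log_likelihood(rho=0.55, p_prot=0.5, p_solv=0.3, p_h=0.2))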
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
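A hedged sketch of the detection-curve machinery described above (logistic regression of detection on sampling intensity and target density); the simulated data, coefficients, and library choice are stand-ins, not the study's field results:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
effort = rng.uniform(1, 30, n)                  # e.g. minutes of total area search
density = rng.uniform(0.1, 5, n)                # targets per square meter
eta = -3.0 + 0.12 * effort + 0.9 * np.log(density)
detected = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

X = sm.add_constant(np.column_stack([effort, np.log(density)]))
fit = sm.Logit(detected, X).fit(disp=0)
print(fit.params)                               # roughly recovers (-3, 0.12, 0.9)
p = fit.predict([1.0, 10.0, np.log(0.5)])       # detection prob: 10 min, 0.5 per m^2
print(1 - p)                                    # probability of a false negative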
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass.
Pgha - Probability of a person being in the glass hazard area
Phit - Probability of hit
Phit (f) - Probability of hit for fatality
Phit (maji) - Probability of hit for major injury
Phit (mini) - Probability of hit for minor injury
Pi - Debris probability densities at the ES
PMaj (pair) - Individual...
...combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit (f), major injury
Ant-inspired density estimation via random walks
Musco, Cameron; Su, Hsin-Hao
2017-01-01
Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
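A toy simulation of the encounter-rate estimator analyzed above: agents random-walk on a torus grid and each counts, per step, how many others share its cell; the per-step encounter rate then approximates the density. Grid size, agent count, and step count are arbitrary choices:

import numpy as np

rng = np.random.default_rng(6)
side, n_agents, steps = 50, 125, 400      # true density = 125/2500 = 0.05
pos = rng.integers(0, side, size=(n_agents, 2))
encounters = np.zeros(n_agents)
moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])
for _ in range(steps):
    pos = (pos + moves[rng.integers(0, 4, n_agents)]) % side
    cell = pos[:, 0] * side + pos[:, 1]
    counts = np.bincount(cell, minlength=side * side)
    encounters += counts[cell] - 1        # others occupying my cell this step
print(encounters.mean() / steps)          # ~0.05, close to the true density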
Miller, Tom E X
2007-07-01
1. It is widely accepted that density-dependent processes play an important role in most natural populations. However, persistent challenges in our understanding of density-dependent population dynamics include evaluating the shape of the relationship between density and demographic rates (linear, concave, convex), and identifying extrinsic factors that can mediate this relationship. 2. I studied the population dynamics of the cactus bug Narnia pallidicornis on host plants (Opuntia imbricata) that varied naturally in relative reproductive effort (RRE, the proportion of meristems allocated to reproduction), an important plant quality trait. I manipulated per-plant cactus bug densities, quantified subsequent dynamics, and fit stage-structured models to the experimental data to ask if and how density influences demographic parameters. 3. In the field experiment, I found that populations with variable starting densities quickly converged upon similar growth trajectories. In the model-fitting analyses, the data strongly supported a model that defined the juvenile cactus bug retention parameter (joint probability of surviving and not dispersing) as a nonlinear decreasing function of density. The estimated shape of this relationship shifted from concave to convex with increasing host-plant RRE. 4. The results demonstrate that host-plant traits are critical sources of variation in the strength and shape of density dependence in insects, and highlight the utility of integrated experimental-theoretical approaches for identifying processes underlying patterns of change in natural populations.
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
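As a toy illustration of the adjustment step (all numbers below are hypothetical, not the study's fitted values): the sampled count is divided by a predicted cumulative capture probability to obtain abundance.

```python
def cumulative_capture_probability(p_per_pass, n_passes):
    """P(captured on at least one of n electrofishing passes)."""
    return 1.0 - (1.0 - p_per_pass) ** n_passes

catch = 57                 # smallmouth bass sampled (hypothetical count)
p_pass = 0.35              # per-pass capture probability (hypothetical)
p_cum = cumulative_capture_probability(p_pass, n_passes=3)
print(f"cumulative capture probability = {p_cum:.3f}")
print(f"abundance estimate = catch / p = {catch / p_cum:.1f} fish")
```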
Planetesimal and Protoplanet Dynamics in a Turbulent Protoplanetary Disk
NASA Astrophysics Data System (ADS)
Yang, Chao-Chin; Mac Low, M.; Menou, K.
2010-01-01
In the core accretion scenario of planet formation, kilometer-sized planetesimals are the building blocks toward planetary cores. Their dynamics, however, are strongly influenced by their natal protoplanetary gas disks. It is generally believed that these disks are turbulent, most likely due to the magnetorotational instability. The resulting density perturbations in the gas render the movement of the particles a random process. Depending on its strength, this process might cause several interesting consequences in the course of planet formation, specifically the survivability of objects under rapid inward type-I migration and/or collisional destruction. Using the local-shearing-box approximation, we conduct numerical simulations of planetesimals moving in a turbulent, magnetized gas disk, either unstratified or vertically stratified. We produce a fiducial disk model with a turbulent accretion parameter (Shakura-Sunyaev α) of about 10^-2 and root-mean-square density perturbation of about 10%, and statistically characterize the evolution of the orbital properties of the particles moving in the disk. These measurements result in accurate calibration of the random process of particle orbital change, indicating noticeably smaller magnitudes than predicted by global simulations, although the results may depend on the size of the shearing box. We apply these results to revisit the survivability of planetesimals under collisional destruction or protoplanets under type-I migration. Planetesimals are probably secure from collisional destruction, except for kilometer-sized objects situated in the outer regions of a young protoplanetary disk. On the other hand, we confirm earlier studies of local models in that type-I migration probably dominates diffusive migration due to stochastic torques for most planetary cores and terrestrial planets. Discrepancies in the derived magnitude of turbulence between local and global simulations of magnetorotationally unstable disks remain an open issue, with important consequences for planet formation scenarios.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
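The three estimators are straightforward to sketch. Below is an illustrative Python analogue of the R tool's workflow (synthetic inverse-gamma "landslide areas", not a real inventory; the Double Pareto model is not in SciPy, so only the inverse-gamma MLE is shown):

```python
import numpy as np
from scipy import stats

areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500, random_state=0)  # synthetic areas, m^2

hist, edges = np.histogram(areas, bins=40, density=True)    # (i) HDE
kde = stats.gaussian_kde(np.log(areas))                     # (ii) KDE on log-area
a, loc, scale = stats.invgamma.fit(areas, floc=0.0)         # (iii) MLE fit
print(f"histogram peak = {hist.max():.2e}, KDE bandwidth factor = {kde.factor:.2f}")
print(f"MLE inverse-gamma: shape = {a:.2f}, scale = {scale:.0f} m^2")
```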
Towards Determining the Optimal Density of Groundwater Observation Networks under Uncertainty
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Kokosi, Angeliki; Mamounakis, Georgios
2016-04-01
Time series of groundwater level constitute one of the main sources of information when studying the availability of ground water reserves, at a regional level, under changing climatic conditions. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the structure of the aquifer, and in particular by the spatial distribution of hydraulic conductivity (i.e. layering), dependencies in the transition rates between different geologic formations, juxtapositional tendencies, etc. In this work, we: 1) use the concept of transition probabilities embedded in a Markov chain setting to conditionally simulate synthetic aquifer structures representative of geologic formations commonly found in the literature (see e.g. Hoeksema and Kitanidis, 1985), and 2) study how the density of observation wells affects the estimation accuracy of hydraulic heads at unobserved locations. The obtained results are promising, pointing toward establishing design criteria based on the statistical structure of the aquifer, such as the level of dependence in the transition rates of observed lithologies. Reference: Hoeksema, R.J. and P.K. Kitanidis (1985) Analysis of spatial structure of properties of selected aquifers, Water Resources Research, 21(4), 563-572. Acknowledgments: This work is sponsored by the Onassis Foundation under the "Special Grant and Support Program for Scholars' Association Members".
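A minimal sketch of the transition-probability idea (the four facies and the matrix below are hypothetical, not fitted rates): a Markov chain generates a vertical lithofacies column whose layering statistics follow the prescribed transition probabilities.

```python
import numpy as np

facies = ["gravel", "sand", "silt", "clay"]
P = np.array([[0.6, 0.3, 0.1, 0.0],   # hypothetical transition matrix;
              [0.2, 0.5, 0.2, 0.1],   # row i gives P(next facies | facies i)
              [0.1, 0.2, 0.5, 0.2],
              [0.0, 0.1, 0.3, 0.6]])

rng = np.random.default_rng(42)
state, column = 0, []
for _ in range(30):                    # simulate 30 vertical cells
    column.append(facies[state])
    state = rng.choice(4, p=P[state])
print(" ".join(column))
```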
ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.
Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro
2018-01-01
The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to many empirical data sets. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for the research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools, programmed in Python, developed for numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods and quantitatively evaluate the goodness of the obtained fits (which is usually an overlooked point in most literature in the area). The analysis done allows one to identify outliers in the empirical datasets and judiciously determine if there is a need for data trimming and at which points it should be done.
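For readers without ExGUtils at hand, SciPy's exponnorm is the same ex-Gaussian family under the reparameterization K = τ/σ, so a maximum likelihood fit can be sketched as follows (synthetic reaction times; parameter values hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, tau = 450.0, 60.0, 120.0          # hypothetical RT parameters (ms)
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rts)     # MLE fit; mu = loc, sigma = scale, tau = K*scale
print(f"mu = {loc:.0f} ms, sigma = {scale:.0f} ms, tau = {K * scale:.0f} ms")
```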
NASA Astrophysics Data System (ADS)
Saha, Srilekha; Maiti, Santanu K.; Karmakar, S. N.
2016-09-01
The electronic behavior of a 1D Aubry chain with Hubbard interaction is critically analyzed in the presence of an electric field. Multiple energy bands are generated as a result of the Hubbard correlation and the Aubry potential, and within these bands localized states develop under the applied electric field. Within a tight-binding framework we compute the electronic transmission probability and average density of states using a Green's function approach, where the interaction parameter is treated under the Hartree-Fock mean field scheme. From our analysis we find that selective transmission can be obtained by tuning the injecting electron energy, and thus the present model can be utilized as a controlled switching device.
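A stripped-down sketch of the transmission calculation (a non-interacting chain with an Aubry-like on-site potential and wide-band leads; the paper's Hubbard term, Hartree-Fock treatment, and electric field are omitted here):

```python
import numpy as np

N, t, lam, beta = 30, 1.0, 0.5, (np.sqrt(5) - 1) / 2
onsite = lam * np.cos(2 * np.pi * beta * np.arange(N))        # Aubry-like potential
H = (np.diag(onsite)
     + np.diag(-t * np.ones(N - 1), 1)
     + np.diag(-t * np.ones(N - 1), -1))

gamma = 0.5                                    # lead coupling (wide-band limit)
Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = -0.5j * gamma
Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = -0.5j * gamma
Gam_L = 1j * (Sigma_L - Sigma_L.conj().T)
Gam_R = 1j * (Sigma_R - Sigma_R.conj().T)

for E in np.linspace(-2.5, 2.5, 6):
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - Sigma_L - Sigma_R)
    T = np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real          # Landauer transmission
    print(f"E = {E:+.2f}  T(E) = {T:.4f}")
```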
Ellison, Aaron M.; Jackson, Scott
2015-01-01
Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m2 and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests. PMID:26020008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearson, Walter H.; Skalski, J R.; Sobocinski, Kathryn L.
2006-02-01
Ship wakes produced by deep-draft vessels transiting the lower Columbia River have been observed to cause stranding of juvenile salmon. Proposed deepening of the Columbia River navigation channel has raised concerns about the potential impact of the deepening project on juvenile salmon stranding. The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory design and conduct a study to assess stranding impacts that may be associated with channel deepening. The basic study design was a multivariate analysis of covariance of field observations and measurements under a statistical design for a before and after impact comparison. We have summarized field activities and statistical analyses for the "before" component of the study here. Stranding occurred at all three sampling sites and during all three sampling seasons (Summer 2004, Winter 2005, and Spring 2005), for a total of 46 stranding events during 126 observed vessel passages. The highest occurrence of stranding occurred at Barlow Point, WA, where 53% of the observed events resulted in stranding. Other sites included Sauvie Island, OR (37%) and County Line Park, WA (15%). To develop an appropriate impact assessment model that accounted for relevant covariates, regression analyses were conducted to determine the relationships between stranding probability and other factors. Nineteen independent variables were considered as potential factors affecting the incidence of juvenile salmon stranding, including tidal stage, tidal height, river flow, current velocity, ship type, ship direction, ship condition (loaded/unloaded), ship speed, ship size, and a proxy variable for ship kinetic energy. In addition to the ambient and ship characteristics listed above, site, season, and fish density were also considered. Although no single factor appears as the primary factor for stranding, statistical analyses of the covariates resulted in the following equations: (1) Stranding Probability ~ Location + Kinetic Energy Proxy + Tidal Height + Salmonid Density + Kinetic Energy Proxy × Tidal Height + Tidal Height × Salmonid Density. (2) Stranding Probability ~ Location + Total Wave Distance + Salmonid Density Index. (3) Log(Total Wave Height) ~ Ship Block + Tidal Height + Location + Ship Speed. (4) Log(Total Wave Excursion Across the Beach) ~ Location + Kinetic Energy Proxy + Tidal Height. The above equations form the basis for a conceptual model of the factors leading to salmon stranding. The equations also form the basis for an approach for assessing impacts of dredging under the before/after study design.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Helble, Tyler A; D'Spain, Gerald L; Hildebrand, John A; Campbell, Gregory S; Campbell, Richard L; Heaney, Kevin D
2013-09-01
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. A common mistake in the analysis of marine mammal acoustic data is formulating conclusions about these animals without first understanding how environmental properties such as bathymetry, sediment properties, water column sound speed, and ocean acoustic noise influence the detection and character of vocalizations in the acoustic data. The approach in this paper is to use Monte Carlo simulations with a full wave field acoustic propagation model to characterize the site specific probability of detection of six types of humpback whale calls at three passive acoustic monitoring locations off the California coast. Results show that the probability of detection can vary by factors greater than ten when comparing detections across locations, or comparing detections at the same location over time, due to environmental effects. Effects of uncertainties in the inputs to the propagation model are also quantified, and the model accuracy is assessed by comparing calling statistics amassed from 24,690 humpback units recorded in the month of October 2008. Under certain conditions, the probability of detection can be estimated with uncertainties sufficiently small to allow for accurate density estimates.
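A schematic Monte Carlo version of this kind of calculation (simple spherical spreading and a logistic detector curve stand in for the paper's full wave field propagation model and measured detector characterization; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
SL = rng.normal(165.0, 5.0, n)                      # source level, dB re 1 uPa (assumed)
r = 20_000.0 * np.sqrt(rng.uniform(0, 1, n))        # range, uniform over a 20 km disc
TL = 20.0 * np.log10(np.maximum(r, 1.0))            # spherical-spreading transmission loss
NL = 75.0                                           # ambient noise level, dB (assumed)
SNR = SL - TL - NL

# detector characterization: P(detect | SNR), logistic around a 10 dB threshold
p_det = 1.0 / (1.0 + np.exp(-(SNR - 10.0) / 2.0))
print(f"mean probability of detection over the disc: {p_det.mean():.3f}")
```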
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
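The log-concavity claim can be checked numerically with SciPy's ncx2 distribution (a quick grid check, not a proof):

```python
import numpy as np
from scipy import stats

df, nc = 4, 3.0                              # degrees of freedom >= 2, noncentrality
x = np.linspace(0.05, 30, 2000)
logpdf = stats.ncx2.logpdf(x, df, nc)
second_diff = np.diff(logpdf, 2)             # discrete second derivative of log-pdf
print("log-concave on the grid:", bool(np.all(second_diff <= 1e-10)))
```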
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory, and it has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
Solar wind and extreme ultraviolet modulation of the lunar ionosphere/exosphere
NASA Technical Reports Server (NTRS)
Freeman, J. W.
1976-01-01
The ALSEP/SIDE detectors routinely monitor the dayside lunar ionosphere. Variations in the ionosphere are found to correlate with both the 2800 MHz radio index which can be related to solar EUV and with the solar wind proton flux. For the solar wind, the ionospheric variation is proportionately greater than that of the solar wind. This suggests an amplification effect on the lunar atmosphere due perhaps to sputtering of the surface or, less probably, an inordinate enhancement of noble gases in the solar wind. The surface neutral number density is calculated under the assumption of neon gas. During a quiet solar wind this number agrees with or is slightly above that expected for neon accreted from the solar wind. During an enhanced solar wind the neutral number density is much higher.
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
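A hedged illustration of why the stress-change to stressing-rate ratio matters (a lognormal renewal model and round numbers stand in for the paper's sampled recurrence-aperiodicity pairs): a static stress step Δτ advances the renewal clock by Δτ divided by the tectonic stressing rate.

```python
from scipy import stats

median_recur, aperiodicity = 200.0, 0.5      # years (hypothetical values)
elapsed, horizon = 120.0, 30.0               # years since last event; forecast window
rv = stats.lognorm(s=aperiodicity, scale=median_recur)

def cond_prob(t):
    """P(event in (t, t + horizon] | no event up to t)."""
    return (rv.cdf(t + horizon) - rv.cdf(t)) / rv.sf(t)

clock_advance = 0.1 / 0.01                   # 0.1 MPa step / 0.01 MPa/yr rate = 10 yr
print(f"before: {cond_prob(elapsed):.3f}")
print(f"after:  {cond_prob(elapsed + clock_advance):.3f}")
```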
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components
NASA Technical Reports Server (NTRS)
1991-01-01
This annual report summarizes the work completed during the third year of technical effort on the referenced contract. Principal developments continue to focus on the Probabilistic Finite Element Method (PFEM) which has been under development for three years. Essentially all of the linear capabilities within the PFEM code are in place. Major progress was achieved in the applications and verification phase. An EXPERT module architecture was designed and partially implemented. EXPERT is a user interface module which incorporates an expert system shell for the implementation of a rule-based interface utilizing the experience and expertise of the user community. The Fast Probability Integration (FPI) Algorithm continues to demonstrate outstanding performance characteristics for the integration of probability density functions for multiple variables. Additionally, an enhanced Monte Carlo simulation algorithm was developed and demonstrated for a variety of numerical strategies.
Criticality and Phase Transition in Stock-Price Fluctuations
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2006-02-01
We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDF) of the log returns. The temporal dependence of fat tails in the PDF of a ten-minute log return shows a gradual, systematic increase in the probability of the appearance of large increments on approaching Black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the Black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards the scale-invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.
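The scale dependence of fat tails is simple to probe numerically. The sketch below uses synthetic heavy-tailed returns (Student's t, not the S&P 500 data): for i.i.d. returns the excess kurtosis decays under temporal aggregation, whereas the paper reports a transition toward scale-invariant non-Gaussianity at the crash.

```python
from scipy import stats

r = stats.t.rvs(df=5, size=2**18, random_state=0)   # synthetic "ten-minute" returns
for scale in (1, 4, 16, 64):
    # aggregate returns over longer time scales by summing blocks
    agg = r[: len(r) // scale * scale].reshape(-1, scale).sum(axis=1)
    print(f"scale {scale:3d}: excess kurtosis = {stats.kurtosis(agg):6.2f}")
```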
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
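A minimal sketch of the two-step mapping (a Gaussian kernel density estimate per data set, then a weighted linear combination); the locations and weights below are hypothetical, not the elicited Somma-Vesuvio values.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
vents_a = rng.normal([0.0, 0.0], 0.5, (40, 2)).T   # data set A, e.g. past vent locations
vents_b = rng.normal([1.0, 0.5], 0.8, (25, 2)).T   # data set B, e.g. eruptive fissures

kde_a, kde_b = gaussian_kde(vents_a), gaussian_kde(vents_b)
w_a, w_b = 0.7, 0.3                                # elicitation-style weights (hypothetical)

xy = np.mgrid[-2:3:200j, -2:3:200j].reshape(2, -1)
pdf = w_a * kde_a(xy) + w_b * kde_b(xy)            # combined vent-opening density
peak = xy[:, pdf.argmax()]
print(f"highest vent-opening probability near ({peak[0]:.2f}, {peak[1]:.2f})")
```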
Kuparinen, Anna; Stenseth, Nils Christian; Hutchings, Jeffrey A
2014-12-01
The evolution of life histories over contemporary time scales will almost certainly affect population demography. One important pathway for such eco-evolutionary interactions is the density-dependent regulation of population dynamics. Here, we investigate how fisheries-induced evolution (FIE) might alter density-dependent population-productivity relationships. To this end, we simulate the eco-evolutionary dynamics of an Atlantic cod (Gadus morhua) population under fishing, followed by a period of recovery in the absence of fishing. FIE is associated with increases in juvenile production, the ratio of juveniles to mature population biomass, and the ratio of the mature population biomass relative to the total population biomass. In contrast, net reproductive rate (R0) and per capita population growth rate (r) decline concomitantly with evolution. Our findings suggest that FIE can substantially modify the fundamental population-productivity relationships that underlie density-dependent population regulation and that form the primary population-dynamical basis for fisheries stock-assessment projections. From a conservation and fisheries-rebuilding perspective, we find that FIE reduces R0 and r, the two fundamental correlates of population recovery ability and, inversely, of extinction probability.
Distribution patterns of microplastics within the plankton of a tropical estuary.
Lima, A R A; Costa, M F; Barletta, M
2014-07-01
The Goiana Estuary was studied regarding the seasonal and spatial variations of microplastics (<5 mm) and their quantification relative to the zooplankton. The total density (n 100 m^-3) of microplastics represented half of the total fish larvae density and was comparable to the fish egg density. Soft plastics, hard plastics, threads and paint chips were found in the samples (n=216). Their origins are probably the river basin, the sea and fisheries (including the lobster fleet). On some occasions, the amount of microplastics surpassed that of the ichthyoplankton. The highest amount of microplastics was observed during the late rainy season, when the environment is under the influence of the highest river flow, which induces the runoff of plastic fragments to the lower estuary. The density of microplastics in the water column will determine their bioavailability to planktivorous organisms, and then to larger predators, possibly promoting the transfer of microplastics between trophic levels. These findings are important for better informing researchers in future works and as basic information for managerial actions. Copyright © 2014 Elsevier Inc. All rights reserved.
RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection
Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.
2015-01-01
Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112
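A toy version of the underlying idea (random half-space partitions scoring points by the training mass of their leaf; this is a simplification, not the authors' RS-Forest with its streaming dual-profile updates):

```python
import numpy as np

def build_scores(train, queries, n_trees=25, depth=8, seed=0):
    """Average leaf mass over random half-space trees; low score => anomaly."""
    rng = np.random.default_rng(seed)
    lo, hi = train.min(0) - 0.5, train.max(0) + 0.5
    scores = np.zeros(len(queries))
    for _ in range(n_trees):
        dims = rng.integers(0, train.shape[1], depth)   # random split dimensions

        def leaf_id(X):
            l, h = np.tile(lo, (len(X), 1)), np.tile(hi, (len(X), 1))
            ids = np.zeros(len(X), dtype=int)
            for d in dims:                  # halve attribute d's current range
                mid = (l[:, d] + h[:, d]) / 2
                right = X[:, d] > mid
                ids = ids * 2 + right
                l[right, d] = mid[right]
                h[~right, d] = mid[~right]
            return ids

        mass = np.bincount(leaf_id(train), minlength=2**depth)
        scores += mass[leaf_id(queries)]
    return scores / n_trees

rng = np.random.default_rng(1)
train = rng.normal(0, 1, (2000, 2))
queries = np.array([[0.0, 0.0], [4.5, 4.5]])
print(build_scores(train, queries))         # inlier scores high, outlier low
```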
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
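A compact Monte Carlo sketch of the setup (Cholesky construction of fractional Gaussian noise, feasible only for short paths; the wall is imposed by folding at x = 0):

```python
import numpy as np

def fgn_cov(n, H):
    """Covariance matrix of fractional Gaussian noise with Hurst index H."""
    k = np.arange(n)
    g = 0.5 * (np.abs(k - 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k + 1)**(2*H))
    i, j = np.meshgrid(k, k, indexing="ij")
    return g[np.abs(i - j)]

H, n, walkers = 0.75, 256, 400            # H > 1/2: superdiffusion (alpha = 2H)
L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
rng = np.random.default_rng(5)
increments = rng.standard_normal((walkers, n)) @ L.T   # rows are fGn paths
x = np.zeros(walkers)
for step in range(n):
    x = np.abs(x + increments[:, step])   # reflecting wall at x = 0
print("fraction of walkers within 0.5 of the wall:", np.mean(x < 0.5))
```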
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
How the climate limits the wood density of angiosperms
NASA Astrophysics Data System (ADS)
Choi, Jin Woo; Kim, Ho-Young
2017-11-01
Flowering trees have various types of wood structure to perform multiple functions under their environmental conditions. In addition to transporting water from the roots to the canopy and providing mechanical support, the structure should provide resistance to embolism to maintain the soil-plant-atmosphere continuum. By investigating existing data on the resistance to embolism and wood density of 165 angiosperm species, here we show that the climate can limit the intrinsic properties of trees. Trees living in dry environments require a high wood density to slow the pressure decrease, as they lose water relatively quickly by evaporation. However, building too much tissue reduces hydraulic conductivity and the moisture concentration around mesophyll cells. To rationalize the biologically observed lower bound of the wood density, we construct a mechanical model that predicts the wood density as a function of the vulnerability to embolism and the time for recovery. We also build an artificial system using hydrogel microchannels that can test the probability of embolism as a function of conduit distributions. Our theoretical prediction is shown to be consistent with the results obtained from the artificial system and the biological data.
Shear coaxial injector atomization phenomena for combusting and non-combusting conditions
NASA Technical Reports Server (NTRS)
Pal, S.; Moser, M. D.; Ryan, H. M.; Foust, M. J.; Santoro, R. J.
1992-01-01
Measurements of LOX drop size and velocity in a uni-element liquid propellant rocket chamber are presented. The use of the Phase Doppler Particle Analyzer in obtaining temporally-averaged probability density functions of drop size in a harsh rocket environment has been demonstrated. Complementary measurements of drop size/velocity for simulants under cold flow conditions are also presented. The drop size/velocity measurements made for combusting and cold flow conditions are compared, and the results indicate that there are significant differences in the two flowfields.
Hyperons in the nuclear pasta phase
NASA Astrophysics Data System (ADS)
Menezes, Débora P.; Providência, Constança
2017-10-01
We have investigated under which conditions hyperons (particularly Λ and Σ−) can be found in the nuclear pasta phase. The probability that these particles appear increases with density and temperature and with decreasing electron fraction, but they always occur in very small amounts. Λ hyperons occur only in the gas, and in smaller amounts than would occur if matter were homogeneous, never with abundances above 10^-5. The amount of Σ− in the gas is at least two orders of magnitude smaller and can be disregarded in practical calculations.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
NASA Astrophysics Data System (ADS)
Qian, H.
2015-07-01
Unbalanced probability circulation, which yields cyclic motions in phase space, is the defining characteristic of a stationary diffusion process without detailed balance. In over-damped soft matter systems, such behavior is a hallmark of the presence of a sustained external driving force accompanied by dissipation. In an under-damped and strongly correlated system, however, cyclic motions are often the consequence of a conservative dynamics. In the present paper, we give a novel interpretation of a class of diffusion processes with stationary circulation in terms of a Maxwell-Boltzmann equilibrium in which cyclic motions lie on the level sets of the stationary probability density function and are thus non-dissipative, e.g., a supercurrent. This implies an orthogonality between the stationary circulation J_ss(x) and the gradient of the stationary probability density f_ss(x) > 0. A sufficient and necessary condition for the orthogonality is a decomposition of the drift b(x) = j(x) + D(x)∇φ(x), where ∇·j(x) = 0 and j(x)·∇φ(x) = 0. Stationary processes with such a Maxwell-Boltzmann equilibrium have an underlying conservative dynamics, and a first integral φ(x) ≡ -ln f_ss(x) = const, akin to a Hamiltonian system. At all times, an instantaneous free energy balance equation exists for a given diffusion system; and an extended energy conservation law among an entire family of diffusion processes with different parameter α can be established via a Helmholtz theorem. For the general diffusion process without the orthogonality, a nonequilibrium cycle emerges, which consists of externally driven φ-ascending steps and spontaneous φ-descending movements, alternated with iso-φ motions. The theory presented here provides a rich mathematical narrative for complex mesoscopic dynamics, in contradistinction to an earlier one [H. Qian et al., J. Stat. Phys. 107, 1129 (2002)]. This article is supplemented with comments by H. Ouerdane and a final reply by the author.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
Using Bayesian filtering, the KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location, as a measure of the algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We
Korman, Josh; Yard, Mike
2017-01-01
Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.
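The starting point, the maxima of the Airy probability density, can be located numerically with standard SciPy (this reproduces only the |Ai|² maxima, not the paper's caustic and wavefront construction):

```python
import numpy as np
from scipy.special import airy

x = np.linspace(-15, 2, 20000)
ai = airy(x)[0]                       # Ai(x); airy returns (Ai, Ai', Bi, Bi')
density = ai**2
# interior local maxima of the probability density
interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
maxima = x[1:-1][interior]
print("rightmost maxima of |Ai(x)|^2:", np.round(maxima[-4:][::-1], 3))
```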
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
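A heavily simplified lattice cartoon of the competition being modeled (hypothetical sticking probabilities for metal-on-substrate versus metal-on-metal attachment, and no explicit Brownian-dynamics transport):

```python
import random

random.seed(2)
W, steps = 200, 20000
height = [0] * W                    # 1D substrate; height > 0 means deposited metal
p_sub, p_met = 0.001, 0.6           # attachment probabilities (hypothetical)

for _ in range(steps):
    s = random.randrange(W)         # ion arrives at a random lattice site
    p = p_sub if height[s] == 0 else p_met
    if random.random() < p:
        height[s] += 1              # nucleation (on substrate) or growth (on metal)

# count island left edges, with periodic boundary via height[-1]
islands = sum(1 for i in range(W) if height[i] > 0 and height[i - 1] == 0)
print(f"island density = {islands / W:.3f}, coverage = {sum(height) / W:.2f}")
```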
Detection limit for rate fluctuations in inhomogeneous Poisson processes
NASA Astrophysics Data System (ADS)
Shintani, Toshiaki; Shinomoto, Shigeru
2012-04-01
Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
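As a concrete example of an estimator that discounts irregularity, the sketch below applies the Shimazaki-Shinomoto histogram bin-width selection (one of the optimized-histogram methods in this literature; the event times here are synthetic and constant-rate, so a wide optimal bin is expected):

```python
import numpy as np

rng = np.random.default_rng(9)
T = 100.0
events = rng.uniform(0.0, T, 800)           # stand-in constant-rate event times

best = None
for n_bins in range(2, 200):
    counts, _ = np.histogram(events, bins=n_bins, range=(0.0, T))
    delta = T / n_bins
    # Shimazaki-Shinomoto cost: (2*mean - biased variance) / delta^2
    cost = (2 * counts.mean() - counts.var()) / delta**2
    if best is None or cost < best[0]:
        best = (cost, n_bins)
print(f"optimal bins: {best[1]} (bin width {T / best[1]:.2f})")
```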
Detection limit for rate fluctuations in inhomogeneous Poisson processes.
Shintani, Toshiaki; Shinomoto, Shigeru
2012-04-01
Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
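Once the probability of detection is modeled, the final density step reduces to bookkeeping; the sketch below uses hypothetical inputs, not the paper's Tongue of the Ocean values.

```python
import math

n_detected = 24_000          # detected clicks over the monitoring period (hypothetical)
false_pos = 0.05             # false-positive proportion from manual analysis (hypothetical)
p_detect = 0.032             # modeled mean probability of detection (hypothetical)
area_km2 = math.pi * 20.0**2 # monitored disc of 20 km radius (assumed)
days = 6.0
clicks_per_day = 86_400.0    # clicks per animal per day (hypothetical call rate)

density = n_detected * (1 - false_pos) / (p_detect * area_km2 * days * clicks_per_day)
print(f"estimated density = {density * 1000:.2f} animals per 1000 km^2")
```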
Versino, Daniele; Bronkhorst, Curt Allan
2018-01-31
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
Producibility of fibrous refractory composite insulation, FRCI 40-20. [for reusable heat shielding
NASA Technical Reports Server (NTRS)
Strauss, E. L.; Johnson, C. W.; Graese, R. W.; Campbell, R. L.
1983-01-01
Fibrous Refractory Composite Insulation (FRCI) is a NASA-developed, second generation, reusable heat-shield material that comprises a mixture of aluminoborosilicate fibers, silica fibers, and silicon carbide. Under NASA contract, a program was conducted to demonstrate the capability for manufacturing FRCI 40-20 billets. A detailed fabrication procedure was written and validated by testing specimens from the first two billets. The material conformed to NASA requirements for density, tensile strength, modulus of rupture, thermal expansion, cristobalite content, and uniformity. Twenty-four billets were prepared to provide 20 deliverable articles. Production billets were checked for density, modulus of rupture, cristobalite content, and uniformity. Billet density ranged from 309.48 to 332.22 kg/cu m (19.32 to 20.74 lb/cu ft) and modulus of rupture from 4690 to 10,140 kPa (680 to 1470 psi). Cristobalite content was less than 1 percent. A Weibull analysis of modulus-of-rupture data indicated a 1.5 percent probability for failure below the specified strength of 4480 kPa (650 psi).
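The Weibull strength analysis quoted above reduces to evaluating the Weibull CDF at the specified stress. A sketch, with an assumed shape and scale rather than the contractor's fitted parameters:

```python
# Sketch: probability of failure below a specified stress under a Weibull fit.
from scipy.stats import weibull_min

shape, scale = 8.0, 7500.0           # assumed Weibull fit to MOR data, kPa
spec = 4480.0                         # specified minimum strength, kPa
p_fail = weibull_min.cdf(spec, shape, scale=scale)
print(f"P(MOR < {spec:.0f} kPa) = {p_fail:.2%}")
```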
Global asymptotic stability of plant-seed bank models.
Eager, Eric Alan; Rebarber, Richard; Tenhumberg, Brigitte
2014-07-01
Many plant populations have persistent seed banks, which consist of viable seeds that remain dormant in the soil for many years. Seed banks are important for plant population dynamics because they buffer against environmental perturbations and reduce the probability of extinction. Viability of the seeds in the seed bank can depend on the seed's age, hence it is important to keep track of the age distribution of seeds in the seed bank. In this paper we construct a general density-dependent plant-seed bank model where the seed bank is age-structured. We consider density dependence in both seedling establishment and seed production, since previous work has highlighted that overcrowding can suppress both of these processes. Under certain assumptions on the density dependence, we prove that there is a globally stable equilibrium population vector which is independent of the initial state. We derive an analytical formula for the equilibrium population using methods from feedback control theory. We apply these results to a model for the plant species Cirsium palustre and its seed bank.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when t_a ≪ t and when t_a ≫ t.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
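One standard way to quantify "indistinguishable to a high degree of statistical significance" for two empirical CDFs is a two-sample Kolmogorov-Smirnov test; the sketch below applies it to placeholder period catalogs, not the actual EPC/SB samples.

```python
# Sketch: two-sample KS comparison of orbital-period distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
periods_epc = 10 ** rng.uniform(0.5, 3.5, 70)   # days, placeholder catalog
periods_sb = 10 ** rng.uniform(0.5, 3.5, 120)   # days, placeholder catalog

stat, p_value = ks_2samp(periods_epc, periods_sb)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A large p-value means the two empirical CDFs cannot be distinguished,
# the sense in which the EPC and SB populations match.
```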
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick-seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than in 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has recently been applied to determine the eigenvalue probability density function for random matrices A^(-1)B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
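A simple Monte Carlo cross-check for any first-passage density solver: simulate paths and histogram the first times they hit the barrier. For plain (untransformed) Brownian motion the density is known exactly, f(t) = b/sqrt(2*pi*t^3) * exp(-b^2/(2t)); a time-changed model would replace the plain increments below with time-changed ones. This is an independent sanity check, not the paper's Volterra-based method.

```python
# Sketch: first-passage-time density of Brownian motion by simulation.
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt, b = 20_000, 4000, 1e-3, 1.0

paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
hit = paths >= b
fpt = hit.argmax(axis=1)[hit.any(axis=1)] * dt   # first-passage times of hitters

t, half = 1.0, 0.05                               # density estimate at t = 1
emp = np.sum((fpt > t - half) & (fpt <= t + half)) / (n_paths * 2 * half)
exact = b / np.sqrt(2 * np.pi * t**3) * np.exp(-b**2 / (2 * t))
print(f"density at t={t}: simulated {emp:.3f} vs exact {exact:.3f}")
```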
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize multiple elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference.
Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing
2016-01-01
In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033), and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
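The survey-effort figure follows from elementary probability: with per-visit detection probability p, the chance of at least one detection in n visits is 1 - (1 - p)^n, so the required number of visits solves n ≥ log(0.05)/log(1 - p). A worked version:

```python
# Worked example: visits needed for 95% confidence of detecting a present bird.
import math

p = 0.207                                      # mean per-visit detection probability
n = math.ceil(math.log(0.05) / math.log(1 - p))
print(n)  # -> 13, matching the mean estimate quoted above
```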
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
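A heavily simplified sketch of combining the two information sources: if the intensity-conditioned and location-conditioned densities were both approximated as Gaussians, the fused posterior would be their precision-weighted product and the estimate its mean. The actual method uses full (non-Gaussian) conditional PDFs; the HU values below are invented.

```python
# Sketch: precision-weighted fusion of two per-voxel density estimates.
import numpy as np

mu_i, var_i = 250.0, 80.0**2    # HU estimate from T1/T2 intensities (placeholder)
mu_x, var_x = 400.0, 120.0**2   # HU estimate from atlas location (placeholder)

var_post = 1.0 / (1.0 / var_i + 1.0 / var_x)
mu_post = var_post * (mu_i / var_i + mu_x / var_x)
print(f"fused HU estimate = {mu_post:.0f} +/- {np.sqrt(var_post):.0f}")
```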
NASA Astrophysics Data System (ADS)
Mishra, D. C.; Arora, K.; Tiwari, V. M.
2004-02-01
A combined gravity map over the Indian Peninsular Shield (IPS) and adjoining oceans brings out well the inter-relationships between the older tectonic features of the continent and the adjoining younger oceanic features. The NW-SE, NE-SW and N-S Precambrian trends of the IPS are reflected in the structural trends of the Arabian Sea and the Bay of Bengal, suggesting their probable reactivation. The Simple Bouguer anomaly map shows a consistent increase in gravity value from the continent to the deep ocean basins, which is attributed to isostatic compensation due to variations in the crustal thickness. A crustal density model computed along a profile across this region suggests a thick crust of 35-40 km under the continent, which reduces to 22/20-24 km under the Bay of Bengal with thick sediments of 8-10 km underlain by crustal layers of density 2720 and 2900/2840 kg/m³. The large crustal thickness and trends of the gravity anomalies may suggest a transitional crust in the Bay of Bengal up to 150-200 km from the east coast. The crustal thickness under the Laxmi ridge and east of it in the Arabian Sea is 20 and 14 km, respectively, with 5-6 km thick Tertiary and Mesozoic sediments separated by a thin layer of Deccan Trap. Crustal layers of densities 2750 and 2950 kg/m³ underlie the sediments. The crustal density model in this part of the Arabian Sea (east of the Laxmi ridge) and the structural trends similar to the Indian Peninsular Shield suggest a continent-ocean transitional crust (COTC). The COTC may represent down-dropped and submerged parts of the Indian crust evolved at the time of break-up along the west coast of India and the passage of the Reunion hotspot over India during the late Cretaceous. The crustal model under this part also shows an underplated lower crust and a low-density upper mantle, extending over the continent across the west coast of India, which appears to be related to the Deccan volcanism. The crustal thickness under the western Arabian Sea (west of the Laxmi ridge) reduces to 8-9 km, with crustal layers of densities 2650 and 2870 kg/m³ representing an oceanic crust.
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability estimation based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground-truth observations, and atmospheric sounding data, a probability-of-hail index, which provides a simple estimate of the hail potential, has been trained and adapted for the Naples metropolitan study area. The probability of hail has been calculated starting from four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) the vertically-integrated liquid density product, respectively, have been selected from the available literature. The other two techniques are based on combined criteria of the above-mentioned methods: the first (3) is based on linear discriminant analysis, whereas the other (4) relies on the fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed through ramp membership functions. The performances of the four methods have been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index, and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
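A minimal sketch of the fuzzification step: each hail predictor is mapped to [0, 1] by a ramp membership function and the memberships are aggregated into a probability-of-hail score. The predictor names, ramp breakpoints, and weights below are invented for illustration, not the paper's calibration.

```python
# Sketch: ramp membership functions and aggregation into a hail score.
import numpy as np

def ramp(x, lo, hi):
    """Ramp membership: 0 below lo, 1 above hi, linear in between."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

echo_top_above_freezing = 4.2   # km, hypothetical predictor value
vil_density = 3.1               # g/m^3, hypothetical predictor value

memberships = np.array([
    ramp(echo_top_above_freezing, 1.0, 5.0),
    ramp(vil_density, 2.0, 4.0),
])
weights = np.array([0.5, 0.5])  # assumed equal weighting
poh = float(np.dot(weights, memberships))
print(f"probability of hail ~ {poh:.2f}")
```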
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
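The "survey availability event" idea can be reproduced in a toy experiment: sample a patchy abundance field with n random stations and count how often the survey mean lands above 125% or below 75% of the true mean. The lognormal patchiness model below is an assumption for illustration, not the paper's data.

```python
# Sketch: probability of a biased survey index versus number of samples.
import numpy as np

rng = np.random.default_rng(4)
cells = rng.lognormal(mean=0.0, sigma=1.5, size=50_000)  # patchy domain
true_mean = cells.mean()

def availability_rate(n_samples, n_surveys=2000):
    idx = rng.integers(0, cells.size, (n_surveys, n_samples))
    means = cells[idx].mean(axis=1)
    return np.mean((means > 1.25 * true_mean) | (means < 0.75 * true_mean))

for n in (4, 8, 15, 30):
    print(f"n = {n:2d}: P(availability event) = {availability_rate(n):.2f}")
# The event probability falls only slowly with n, and at small n the biased
# surveys are predominantly biased low, the behavior described above.
```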
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ₀)^α]^(−β) with τ₀ > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2−q)/(q−1) and τ₀ = (q−1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions f_{α,β}(t/τ₀) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
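A minimal numerical check of the quoted frequency-domain pattern and reparameterization, assuming nothing beyond the formulas above; the choices of α and q are arbitrary illustration values.

```python
# Sketch: evaluating the Havriliak-Negami pattern 1/(1+(i*w*tau0)**alpha)**beta.
import numpy as np

alpha, q = 0.5, 1.5
beta = (2 - q) / (q - 1)          # reparameterized exponent (here 1.0)
tau0 = (q - 1) ** (1 / alpha)     # characteristic time (here 0.25)

omega = np.logspace(-3, 3, 7)
hn = 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta
for w, v in zip(omega, hn):
    print(f"omega = {w:8.3f}: |HN| = {abs(v):.4f}")
```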
Interaction of cw CO2 laser radiation with plasma near-metallic substrate surface
NASA Astrophysics Data System (ADS)
Azharonok, V. V.; Astapchik, S. A.; Zabelin, Alexandre M.; Golubev, Vladimir S.; Golubev, V. S.; Grezev, A. N.; Filatov, Igor V.; Chubrik, N. I.; Shimanovich, V. D.
2000-07-01
Optical and spectroscopic methods were used to study the near-surface plasma that is formed under the effect of CW CO2 laser radiation of (2-5)×10⁶ W/cm² power density upon stainless steel in He and Ar shielding gases. The variation of the plume spatial structure with time has been studied, and the outflow of gas-vapor jets from the interaction area has been characterized. The spectra of plasma plume pulsations have been obtained for the frequency range Δf = 0-1 MHz. The temperature and electron concentration of the plasma plume have been determined during radiation interaction with the stainless steel target. Consideration has been given to the most probable mechanisms of the non-stationary interaction between CW laser radiation and the metal.
NASA Astrophysics Data System (ADS)
Mouchtouris, S.; Kokkoris, G.
2018-01-01
A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, with the kinetic energy of electrons as the independent variable. The EEPF consists of a bulk and a depleted tail part and incorporates the effect of the plasma potential, Vp, and pressure. Due to diffusive cooling, the break point of the EEPF is at eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the Gaseous Electronics Conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of the EEPF on the results is investigated by comparison to an EEPF coming from the Boltzmann equation (local electron kinetics approach) and to a Maxwellian EEPF. The accuracy of the results and the fact that the proposed EEPF is predefined render its use a reliable alternative with a low computational cost compared to stochastic electron kinetic models at low-pressure conditions, which can be extended to other gases and/or different electron heating mechanisms.
Coherent Forward Broadening in Cold Atom Clouds
NASA Astrophysics Data System (ADS)
Sutherland, R. T.; Robicheaux, Francis
2016-05-01
It is shown that homogeneous line-broadening in a diffuse cold atom cloud is proportional to the resonant optical depth of the cloud. Further, it is demonstrated how the strong directionality of the coherent interactions causes the cloud's spectra to depend strongly on its shape, even when the cloud is held at constant densities. These two numerical observations can be predicted analytically by extending the single photon wavefunction model. Lastly, elongating a cloud along the line of laser propagation causes the excitation probability distribution to deviate from the exponential decay predicted by the Beer-Lambert law to the extent where the atoms in the back of the cloud are more excited than the atoms in the front. These calculations are conducted at low densities relevant to recent experiments. This work was supported by the National Science Foundation under Grant No. 1404419-PHY.
Stochastic analysis of a pulse-type prey-predator model
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Delay-induced stochastic bifurcations in a bistable system under white noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Zhongkui, E-mail: sunzk@nwpu.edu.cn; Fu, Jin; Xu, Wei
2015-08-15
In this paper, the effects of noise and time delay on stochastic bifurcations are investigated theoretically and numerically in a time-delayed Duffing-Van der Pol oscillator subjected to white noise. Due to the time delay, the random response is not Markovian. Thereby, approximate methods have been adopted to obtain the Fokker-Planck-Kolmogorov equation and the stationary probability density function for the amplitude of the response. Based on the knowledge that stochastic bifurcation is characterized by the qualitative properties of the steady-state probability distribution, it is found that time delay and feedback intensity as well as noise intensity will induce the appearance of stochastic P-bifurcation. Besides, results demonstrated that the effects of the strength of the delayed displacement feedback on stochastic bifurcation are accompanied by a sensitive dependence on time delay. Furthermore, the results from numerical simulations confirm the effectiveness of the theoretical analyses.
Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations
NASA Astrophysics Data System (ADS)
Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.
2014-02-01
The study presents a method for, and an application of, risk assessment of turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic model for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely ultrasonic inspection with an identified flaw indication and ultrasonic inspection without a flaw indication, are considered in the derivation. To estimate fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application to a steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
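A sketch of the flaw-sizing update for the "flaw indicated" scenario: the posterior flaw-size density is proportional to POD(a) times the prior density, normalized on a grid. The log-logistic POD curve and the exponential prior below are stand-ins, not the paper's calibrated model.

```python
# Sketch: Bayesian flaw-size PDF from a probability-of-detection curve.
import numpy as np

a = np.linspace(0.01, 5.0, 1000)             # flaw size grid, mm
prior = np.exp(-a / 1.0)                      # assumed exponential prior
pod = 1.0 / (1.0 + (a / 0.8) ** -3.0)         # assumed log-logistic POD, a50 = 0.8 mm

posterior = pod * prior
posterior /= np.trapz(posterior, a)           # normalize to a PDF

mean_size = np.trapz(a * posterior, a)
print(f"posterior mean flaw size = {mean_size:.2f} mm")
# Feeding this PDF into a crack-growth model propagates the sizing
# uncertainty into the remaining-useful-life estimate.
```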
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate, or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width, by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
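A minimal sketch of the weighting idea: each mock z_f galaxy gets a weight equal to a Gaussian (in log cumulative number density) probability of being a progenitor of the z_0 selection, with a median drift and a scatter that would grow with redshift separation. The drift, scatter, and mock catalog below are placeholders, not the paper's fits.

```python
# Sketch: probability-weighted progenitor properties in number-density space.
import numpy as np

rng = np.random.default_rng(5)
log_n0 = -4.0               # log10 cumulative number density of the z_0 sample
drift, scatter = 0.2, 0.3   # assumed median shift and spread, dex

log_n_f = rng.normal(-4.0, 0.6, 5000)      # mock z_f galaxy catalog
log_mass = 11.0 - 0.8 * (log_n_f + 4.0)    # mock stellar masses

weights = np.exp(-0.5 * ((log_n_f - (log_n0 + drift)) / scatter) ** 2)
weights /= weights.sum()

mean_mass = np.sum(weights * log_mass)
print(f"weighted progenitor log10(M*) = {mean_mass:.2f}")
# The same weights can be reused for star formation rates or velocity
# dispersions, the advantage over picking a fixed number-density bin.
```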
Low probability of a dilution effect for Lyme borreliosis in Belgian forests.
Ruyts, Sanne C; Landuyt, Dries; Ampoorter, Evy; Heylen, Dieter; Ehrmann, Steffen; Coipan, Elena C; Matthysen, Erik; Sprong, Hein; Verheyen, Kris
2018-04-22
An increasing number of studies have investigated the consequences of biodiversity loss for the occurrence of vector-borne diseases such as Lyme borreliosis, the most common tick-borne disease in the northern hemisphere. As host species differ in their ability to transmit the Lyme borreliosis bacteria Borrelia burgdorferi s.l. to ticks, increased host diversity can decrease disease prevalence by increasing the proportion of dilution hosts, host species that transmit pathogens less efficiently. Previous research shows that Lyme borreliosis risk differs between forest types and suggests that a higher diversity of host species might dilute the contribution of small rodents to infect ticks with B. afzelii, a common Borrelia genospecies. However, empirical evidence for a dilution effect in Europe is largely lacking. We tested the dilution effect hypothesis in 19 Belgian forest stands of different forest types along a diversity gradient. We used empirical data and a Bayesian belief network to investigate the impact of the proportion of dilution hosts on the density of ticks infected with B. afzelii, and identified the key drivers determining the density of infected ticks, which is a measure of human infection risk. Densities of ticks and B. afzelii infection prevalence differed between forest types, but the model indicated that the density of infected ticks is hardly affected by dilution. The most important variables explaining variability in disease risk were related to the density of ticks. Combining empirical data with a model-based approach supported decision making to reduce tick-borne disease risk. We found a low probability of a dilution effect for Lyme borreliosis in a north-western European context. We emphasize that under these circumstances, Lyme borreliosis prevention should rather aim at reducing tick-human contact rate instead of attempting to increase the proportion of dilution hosts.
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperatures and number densities in the regimes of 1×10³-1×10⁷ K and 1×10²⁸-1×10³¹ m⁻³, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable than the Debye screening model for describing the fusion plasma. The relativistic correction to bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude in the same number density regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Extended q -Gaussian and q -exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
Körbahti, Bahadır K; Taşyürek, Selin
2015-03-01
Electrochemical oxidation and process optimization of the antibiotic ampicillin at boron-doped diamond (BDD) electrodes were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residual, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined as 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm² current density, and 36 °C reaction temperature. Under response-surface-optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1%, 92.5%, and 71.7 kWh/kg CODr, respectively.
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
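A toy version of this setup can be run with networkx: two Watts-Strogatz graphs, a handful of random interconnecting links, and a discrete-time SIS sweep. The rewiring probability p_rw and the epidemic rates below are illustrative knobs, not the paper's parameter values.

```python
# Sketch: SIS epidemic on two interconnected small-world networks.
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n, k, p_rw, n_inter = 500, 6, 0.1, 50     # size, degree, rewiring, cross-links
beta, mu, steps = 0.06, 0.2, 400          # infection rate, recovery rate, sweeps

g1 = nx.watts_strogatz_graph(n, k, p_rw, seed=1)
g2 = nx.watts_strogatz_graph(n, k, p_rw, seed=2)
g = nx.disjoint_union(g1, g2)             # nodes 0..n-1 and n..2n-1
for _ in range(n_inter):                  # random interconnecting links
    g.add_edge(int(rng.integers(0, n)), int(rng.integers(n, 2 * n)))

infected = set(int(i) for i in rng.choice(2 * n, 10, replace=False))
for _ in range(steps):
    new_inf = {nb for i in infected for nb in g.neighbors(i)
               if nb not in infected and rng.random() < beta}
    recovered = {i for i in infected if rng.random() < mu}
    infected = (infected - recovered) | new_inf

print(f"steady-state infection density ~ {len(infected) / (2 * n):.3f}")
```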
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balances calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread-spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Report sections cover iterative signal restoration: formulation, methods of accelerating convergence, application to image deblurring, and extensions. The image is modeled as the output of a spatial linear filter driven by white noise; if the probability density function of the white noise is known, the joint probability density (likelihood) function for the image can be developed.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
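A hedged sketch of the inversion idea, assuming a two-state (closed/open) channel whose stationary open probability stands in for the full probability density of functional states; the model, cost function, and optimizer choices are illustrative simplifications, not the paper's PDE-based machinery.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

def open_probability(k_open, k_close):
    """Stationary open probability of a closed <-> open Markov model."""
    return k_open / (k_open + k_close)

# pseudo-experimental target generated from a "true" channel
p_open_data = open_probability(2.0, 3.0) + rng.normal(0, 0.005)

def cost(log_rates):
    """Squared mismatch between model and (pseudo) experimental data."""
    k_open, k_close = np.exp(log_rates)
    return (open_probability(k_open, k_close) - p_open_data) ** 2

res = minimize(cost, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("recovered rates:", np.exp(res.x), "cost:", res.fun)
```

Note that from stationary data alone only the ratio k_open/k_close is identifiable in this toy problem, a miniature analogue of the identifiability question the paper probes through the properties of its cost function.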
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields an SNR birth rate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
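Fitting such a log-normal is straightforward: fit a Gaussian to the logarithms of the energies. The sketch below does this on synthetic stand-ins for the inferred explosion energies, not the published values.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic explosion energies in units of 10^51 erg, standing in for the
# values inferred from the 50 LMC remnants
E = rng.lognormal(mean=np.log(0.5), sigma=np.log(3.0), size=50)

logE = np.log(E)
mu, sigma = logE.mean(), logE.std(ddof=1)
print(f"peak of the log E distribution: {np.exp(mu):.2f} x 10^51 erg")
print(f"1-sigma dispersion: factor of {np.exp(sigma):.1f} in energy")
```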
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots depends on a clear understanding of the environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive, and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited a high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of the timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. The age-related increase in natal philopatry was positively related to the higher breeding probability of older females. The decline in natal philopatry with year of banding coincided with a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
Duerr, Adam E.; Miller, Tricia A.; Cornell Duerr, Kerri L; Lanzone, Michael J.; Fesnock, Amy; Katzner, Todd E.
2015-01-01
Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on the distributions and abundance of rare or sensitive wildlife; however, baseline data on the abundance and distribution of such wildlife are often lacking. We surveyed for predatory birds in the Sonoran and Mojave Deserts of southern California, USA, in an area designated for protection under the “Desert Renewable Energy Conservation Plan”, to determine how these birds are distributed across the landscape and how this distribution is affected by existing development. First, we developed species-specific models of resight probability to adjust estimates of abundance and density of each common species. Second, we developed combined-species models of resight probability for common and rare species so that we could make use of sparse data on the latter. We determined that many common species, such as red-tailed hawks, loggerhead shrikes, and especially common ravens, are associated with human development and likely subsidized by human activity. Species-specific and combined-species models of resight probability performed similarly, although the former model type provided higher quality information. Comparing abundance estimates with past surveys in the Mojave Desert suggests that numbers of predatory birds associated with human development have increased while other sensitive species not associated with development have decreased. This approach gave us information beyond what we would have collected by focusing only on common or only on rare species, and thus provides a low-cost framework for others conducting surveys in similar desert environments outside of California.
Yin, Shi; Bernstein, Elliot R
2017-10-05
Iron sulfur cluster anions (FeS)m⁻ (m = 2-8) are studied by photoelectron spectroscopy (PES) at 3.492 eV (355 nm) and 4.661 eV (266 nm) photon energies, and by density functional theory (DFT) calculations. The most probable structures and ground state spin multiplicities for (FeS)m⁻ (m = 2-8) clusters are tentatively assigned through a comparison of their theoretical and experimental first vertical detachment energy (VDE) values. Many spin states lie within 0.5 eV of the ground spin state for the larger (FeS)m⁻ (m ≥ 4) clusters. Theoretical VDEs of these low-lying spin states are in good agreement with the experimental VDE values. Therefore, multiple spin states of each of these iron sulfur cluster anions probably coexist under the current experimental conditions. Such available multiple spin states must be considered when evaluating the properties and behavior of these iron sulfur clusters in real chemical and biological systems. The experimental first VDEs of (FeS)m⁻ (m = 1-8) clusters are observed to change with the cluster size m. The trends in the first VDE can be related to the different properties of the natural-bond-orbital highest singly occupied molecular orbitals (NBO HSOMOs) of each cluster anion. The changing nature of the NBO HSOMO of these (FeS)m⁻ (m = 1-8) clusters, from a p orbital on S, to a d orbital on Fe, and to an Fe-Fe bonding orbital, is probably responsible for the observed increasing trend of their first VDEs with respect to m.
Generalised Sandpile Dynamics on Artificial and Real-World Directed Networks
Zachariou, Nicky; Expert, Paul; Takayasu, Misako; Christensen, Kim
2015-01-01
The main finding of this paper is a novel avalanche-size exponent τ ≈ 1.87 when the generalised sandpile dynamics evolves on the real-world Japanese inter-firm network. The topology of this network is non-layered and directed, displaying the typical bow-tie structure found in real-world directed networks, with cycles and triangles. We show that one can move from a strictly layered regular lattice to the more fluid structure of the inter-firm network in a few simple steps. Relaxing the regular lattice structure by introducing an interlayer distribution for the interactions forces the scaling exponent of the avalanche-size probability density function τ out of the two-dimensional directed sandpile universality class τ = 4/3, into the mean-field universality class τ = 3/2. Numerical investigation shows that these two classes are the only ones that exist on the directed sandpile, regardless of the underlying topology, as long as it is strictly layered. Randomly adding a small proportion of links connecting non-adjacent layers in an otherwise layered network takes the system out of the mean-field regime and produces non-trivial avalanche-size probability density functions. Although these do not display proper scaling, they closely reproduce the behaviour observed on the Japanese inter-firm network.
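An avalanche-size exponent of this kind can be estimated from simulated avalanche sizes by maximum likelihood; the sketch below applies the continuous approximation of the Clauset-Shalizi-Newman estimator to synthetic power-law data. This is one standard fitting procedure, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic power-law distributed avalanche sizes with tau = 1.87, s_min = 1
tau_true, s_min = 1.87, 1.0
u = 1.0 - rng.random(100_000)                       # uniform in (0, 1]
sizes = s_min * u ** (-1.0 / (tau_true - 1.0))      # inverse-transform sampling

# continuous MLE: tau_hat = 1 + n / sum(ln(s_i / s_min))
tau_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / s_min))
se = (tau_hat - 1.0) / np.sqrt(len(sizes))          # standard error
print(f"tau_hat = {tau_hat:.3f} +/- {se:.3f}")
```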
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated.
Directed Research in Bone Discipline: Refining Previous Research Observations for Space Medicine
NASA Technical Reports Server (NTRS)
Sibonga, Jean D.
2015-01-01
Dual-energy X-ray absorptiometry bone mineral density, as a sole index, is an insufficient surrogate for fracture. Clinical practice guidelines using bone mineral density (both World Health Organization and FRAX) are not specific for complicated subjects such as young, healthy persons following prolonged exposure to skeletal unloading (an attribute of spaceflight). Research data suggest that spaceflight induces changes to astronaut bones that could be profound, possibly irreversible, and unlike age-related bone loss on Earth. There is a need to objectively assess factors across human physiology that are also influenced by spaceflight (e.g., muscle) and that contribute to fracture risk. Some of these objective assessments may require innovative technologies, analyses, and modeling. Astronauts are also exposed to novel situations that may overload their bones, highlighting a need to integrate the biomechanics of physical activities into risk assessments. As data accumulate reflecting the biomechanical competence of bone under specific mechanically loaded scenarios (even activities of daily living), the Bone discipline expects the Bone Fracture Module to be more sensitive and/or have less uncertainty in its assessments of fracture probability. Fracture probability drives the requirement for countermeasures. A high level of evidence is unlikely to be obtained; hence, the Bone Research and Clinical Advisory Panel (like a Data Safety Monitoring Board) will provide the recommendations.
Spatially structured superinfection and the evolution of disease virulence.
Caraco, Thomas; Glavanakov, Stephan; Li, Shengua; Maniatty, William; Szymanski, Boleslaw K
2006-06-01
When pathogen strains differing in virulence compete for hosts, spatial structuring of disease transmission can govern both evolved levels of virulence and patterns in strain coexistence. We develop a spatially detailed model of superinfection, a form of contest competition between pathogen strains; the probability of superinfection depends explicitly on the difference in levels of virulence. We apply methods of adaptive dynamics to address the interplay of spatial dynamics and evolution. The mean-field approximation predicts evolution to criticality; any small increase in virulence capable of dynamical persistence is favored. Both pair approximation and simulation of the detailed model indicate that spatial structure constrains disease virulence. Increased spatial clustering reduces the maximal virulence capable of single-strain persistence and, more importantly, reduces the convergent-stable virulence level under strain competition. The spatially detailed model predicts that increasing the probability of superinfection, for given difference in virulence, increases the likelihood of between-strain coexistence. When strains differing in virulence can coexist ecologically, our results may suggest policies for managing diseases with localized transmission. Comparing equilibrium densities from the pair approximation, we find that introducing a more virulent strain into a host population infected by a less virulent strain can sometimes reduce total host mortality and increase global host density.
Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.
2011-01-01
Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is infeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistical theory that ignore geographical location, do not take into account spatial correlation, and assume independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Unlike previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps of the probability of reaching any given cell EUR, which are useful for visualizing geographical variations in prospectivity.
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and Bayes rule is then used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
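A hedged sketch of the PDF-based step: per-category Gaussian PDFs are estimated for a single MUPT feature, each train's likelihood is computed, and train-level evidence is aggregated (naively, assuming independence) into a muscle-level posterior via Bayes rule. The feature values and priors below are illustrative, not clinical data.

```python
import numpy as np
from scipy.stats import norm

# training feature values (e.g., MUP duration in ms) per muscle category
normal_train = np.array([8.2, 9.1, 7.8, 8.9, 9.5])
disease_train = np.array([12.5, 13.2, 11.8, 14.0, 12.9])
pdfs = {
    "normal": norm(normal_train.mean(), normal_train.std(ddof=1)),
    "disease": norm(disease_train.mean(), disease_train.std(ddof=1)),
}
prior = {"normal": 0.5, "disease": 0.5}

def muscle_probability(mupt_features):
    """Aggregate per-MUPT likelihoods into muscle-level posteriors."""
    log_post = {c: np.log(prior[c]) for c in pdfs}
    for x in mupt_features:
        for c, pdf in pdfs.items():
            log_post[c] += np.log(pdf.pdf(x) + 1e-300)  # guard against log(0)
    z = max(log_post.values())
    w = {c: np.exp(v - z) for c, v in log_post.items()}
    s = sum(w.values())
    return {c: w[c] / s for c in w}

print(muscle_probability([11.9, 12.7, 10.5, 13.1]))
```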
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon
2007-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
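A minimal Monte Carlo sketch in the spirit of the BFxRM: parameter distributions for bone loss, bone strength, and applied load are propagated to a fracture probability. The distributions, the linear strength-BMD relation, and all numbers are assumptions for illustration, not the module's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
mission_days = 180

bmd0 = rng.normal(1.0, 0.1, n)                # baseline BMD (g/cm^2), assumed
loss_rate = rng.normal(0.01, 0.003, n)        # fractional BMD loss per month, assumed
bmd = bmd0 * (1 - loss_rate * mission_days / 30)

strength = 9000.0 * bmd                       # femoral-neck strength (N), assumed linear in BMD
load = rng.lognormal(np.log(6000.0), 0.3, n)  # fall-induced load (N), assumed

p_fracture = np.mean(load > strength)         # fracture when load exceeds strength
print(f"estimated fracture probability: {p_fracture:.4f}")
```

Rerunning with a larger mission_days shows the qualitative behavior the abstract reports: fracture probability grows with mission length as BMD declines.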
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies have adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameters of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are an important kind of joint component in aircraft structures.
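A hedged sketch of a particle filter whose importance density is a mixture of the transition density and the measurement density, in the spirit of the proposal above; the scalar crack-growth model, noise levels, and mixture weight are all illustrative, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def gauss(z, m, s):
    """Gaussian density, vectorized."""
    return np.exp(-0.5 * ((z - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def transition(x):
    """Simplified deterministic part of a crack-growth state equation."""
    return x + 0.01 * x ** 1.3

n_p, sig_q, sig_r, alpha = 500, 0.02, 0.05, 0.5
particles = rng.normal(1.0, 0.05, n_p)        # initial crack length (mm)
x_true = 1.0

for k in range(50):
    x_true = transition(x_true) + rng.normal(0, sig_q)
    y = x_true + rng.normal(0, sig_r)         # guided-wave crack measurement
    # mixture importance density: sample from the measurement density with
    # probability alpha, otherwise from the transition density
    from_meas = rng.random(n_p) < alpha
    prop = np.where(from_meas,
                    rng.normal(y, sig_r, n_p),
                    transition(particles) + rng.normal(0, sig_q, n_p))
    prior = gauss(prop, transition(particles), sig_q)
    lik = gauss(y, prop, sig_r)
    q = alpha * gauss(prop, y, sig_r) + (1 - alpha) * prior
    w = prior * lik / q                       # importance weights
    w /= w.sum()
    cw = np.cumsum(w)
    cw[-1] = 1.0                              # guard against float round-off
    idx = np.searchsorted(cw, (rng.random() + np.arange(n_p)) / n_p)
    particles = prop[idx]                     # systematic resampling

print(f"true crack length {x_true:.3f} mm, estimate {particles.mean():.3f} mm")
```

Pulling a fraction of the proposals toward the measurement is what combats the degeneracy that the pure transition proposal suffers when the predicted and measured crack lengths drift apart.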
NASA Astrophysics Data System (ADS)
Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank
2015-04-01
Earthquakes occurring close to hydrocarbon fields under production are often critically examined for being induced or triggered. However, clear and testable rules to discriminate between such events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei-of-strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in the case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced and merely triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case lying between these extremes.
NASA Astrophysics Data System (ADS)
Quinn, Kevin Martin
The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e., the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and the National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. For the three models in the suite with continuous time series of high resolution output, there is substantial variability in when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts on mid-latitude storm tracks are projected over the West Coast and eastern North America, with no less than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.
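A minimal sketch of the cluster-power computation: grid cells above a minimum rain rate are grouped into contiguous clusters with scipy.ndimage.label, and each cluster's rain rates are summed (a cell-area weighting would convert the sums to water mass per unit time). The rain-rate field below is synthetic, not a TRMM retrieval.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
rain = rng.gamma(shape=0.3, scale=2.0, size=(400, 1440))  # mm/hr, synthetic
threshold = 0.5                                           # minimum rain rate

labels, n_clusters = ndimage.label(rain > threshold)
cluster_power = ndimage.sum_labels(rain, labels,
                                   index=np.arange(1, n_clusters + 1))

# empirical probability density of cluster power on logarithmic bins
bins = np.logspace(np.log10(cluster_power.min()),
                   np.log10(cluster_power.max()), 30)
pdf, edges = np.histogram(cluster_power, bins=bins, density=True)
print(n_clusters, "clusters; largest power =", cluster_power.max())
```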
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
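For the kernel estimator, the scaling factor can indeed be chosen from the sample alone; a common data-driven criterion is leave-one-out likelihood cross-validation, sketched below. This stands in for, and is not, the paper's interactive algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 200)          # the random sample

def loo_log_likelihood(x, h):
    """Leave-one-out log likelihood of a Gaussian-kernel density estimate."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)           # leave each point out of its own estimate
    dens = k.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(dens + 1e-300))

hs = np.linspace(0.05, 1.0, 60)
h_best = hs[np.argmax([loo_log_likelihood(x, h) for h in hs])]
print(f"selected scaling factor h = {h_best:.3f}")
```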
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area, and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changes in the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons: first, it allows updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; second, the likelihood function, which in general is predefined using a known pdf, is here constructed in a new and different manner, using the modeled system itself.
NASA Astrophysics Data System (ADS)
Zengmei, L.; Guanghua, Q.; Zishen, C.
2015-05-01
The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity, and spatial distribution of the disaster-bearing bodies also change to some degree. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared to conditions without the project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. We therefore propose an estimation model for the direct benefit of waterlogging control. Firstly, on the basis of a copula function, the joint distribution of rainstorms and water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains, so as to calculate, for each small domain, the probability and the difference between the average waterlogging losses with and without the waterlogging control project (called the regional benefit of the waterlogging control project) under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation. The results show that the benefit estimation model constructed here is applicable to the changing conditions that occur both in the disaster-inducing environment of the waterlogging-prone zone and in the disaster-bearing bodies, considering all cases in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method can reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
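A hedged sketch of this estimation scheme: a Gaussian copula (standing in for whatever copula family is fitted in practice) joins assumed rainstorm and water-level marginals, the plane is divided into a grid of small domains, and the benefit is the probability-weighted mean of an assumed loss-reduction function over those domains. All distributions, parameters, and the loss function are illustrative.

```python
import numpy as np
from scipy import stats

rho = 0.6                                  # copula dependence parameter, assumed
rain = stats.gumbel_r(loc=80, scale=25)    # rainstorm marginal (mm), assumed
level = stats.norm(loc=5, scale=1)         # water-level marginal (m), assumed

def joint_pdf(x, y):
    """Gaussian-copula joint density built from the two marginals."""
    a = stats.norm.ppf(rain.cdf(x))
    b = stats.norm.ppf(level.cdf(y))
    c = np.exp(-(rho ** 2 * (a ** 2 + b ** 2) - 2 * rho * a * b)
               / (2 * (1 - rho ** 2))) / np.sqrt(1 - rho ** 2)
    return c * rain.pdf(x) * level.pdf(y)

def loss_reduction(x, y):
    """Loss without project minus loss with project (assumed form)."""
    return np.maximum(0.0, 0.5 * (x - 60) + 2.0 * (y - 4))

xs = np.linspace(20, 200, 200)
ys = np.linspace(2, 9, 200)
X, Y = np.meshgrid(xs, ys)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
benefit = np.sum(joint_pdf(X, Y) * loss_reduction(X, Y)) * dx * dy
print(f"expected benefit (arbitrary loss units): {benefit:.1f}")
```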
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoproteins B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and higher predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipids/lipoproteins and the probability of coronary heart disease.
Controlling the Shannon Entropy of Quantum Systems
Xing, Yifan; Wu, Jun
2013-01-01
This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.
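The controlled quantity can be made concrete in a few lines: the Shannon entropy of a two-level state's measurement probabilities, driven toward a target value here by a naive finite-difference gradient step. This is an illustrative stand-in, not the paper's controller design.

```python
import numpy as np

def entropy_of(theta):
    """Shannon entropy of the measurement probabilities of |psi(theta)>."""
    p = np.array([np.cos(theta) ** 2, np.sin(theta) ** 2])
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

theta, target, lr = 0.3, 0.5, 0.5          # target entropy (maximum is ln 2)
for _ in range(200):
    grad = (entropy_of(theta + 1e-5) - entropy_of(theta - 1e-5)) / 2e-5
    theta -= lr * (entropy_of(theta) - target) * grad  # descend on (H - target)^2 / 2
print(f"achieved entropy {entropy_of(theta):.4f} (target {target})")
```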
Design for cyclic loading endurance of composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.
1993-01-01
The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites, taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of the cumulative distribution function (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to random pressure loading. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
How directional mobility affects coexistence in rock-paper-scissors models
NASA Astrophysics Data System (ADS)
Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.; Santos, M. A.
2018-03-01
This work deals with a system of three distinct species that changes in time under the presence of mobility, selection, and reproduction, as in the popular rock-paper-scissors game. The novelty of the current study is the modification of the mobility rule to the case of directional mobility, in which the species move following the direction associated with a larger (averaged) number density of selection targets in the surrounding neighborhood. Directional mobility can be used to simulate eyes that see or a nose that smells, and we show how it may contribute to reducing the probability of coexistence.
Variation of fan tone steadiness for several inflow conditions
NASA Technical Reports Server (NTRS)
Balombin, J. R.
1978-01-01
An amplitude probability density function analysis technique for quantifying the degree of fan noise tone steadiness has been applied to data from a fan tested under a variety of inflow conditions. The test conditions included typical static operation, inflow control by a honeycomb/screen device, and forward velocity in a wind tunnel simulating flight. The ratio of mean-square sinusoidal-to-random signal content in the fundamental and second harmonic tones was found to vary by more than an order of magnitude. Some implications of these results concerning the nature of fan noise generation mechanisms are discussed.
Expected social utility of life time in the presence of a chronic disease.
Mulder, P G; Hempenius, A L
1993-10-01
Interventive action aimed at reducing the incidence of an irreversible chronic noncommunicable disease in a population has various effects. Hopefully, it increases total longevity in the population and causes the disease to develop later in time in a smaller portion of the population. In this paper a statistical model is built by which these effects can be estimated. A three-dimensional probability density function that underlies this model is changed by the interventive action. It is shown how a three-dimensional utility function can be defined to appropriately judge this change.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
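A minimal sketch of the observation the method rests on, assuming nothing beyond numpy: samples of a sinusoid taken over full cycles, binned into a histogram, trace out the waveform's characteristic arcsine-shaped PDF, and a modulation that reshapes the waveform reshapes this histogram.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
carrier = np.sin(2 * np.pi * 5 * t)            # unmodulated sinusoid, 5 cycles
hist, edges = np.histogram(carrier, bins=40, range=(-1, 1), density=True)

# theoretical PDF of a unit-amplitude sinusoid: 1 / (pi * sqrt(1 - x^2))
centers = 0.5 * (edges[:-1] + edges[1:])
inner = np.abs(centers) < 0.9                  # avoid the integrable edge singularities
theory = 1.0 / (np.pi * np.sqrt(1.0 - centers[inner] ** 2))
print("max deviation from arcsine PDF (interior bins):",
      np.max(np.abs(hist[inner] - theory)))
```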
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes a two-dimensional distribution of complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of the phase. The moment-generating function for intensity is obtained in closed form through these parameters. An example of an exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
Dendritic brushes under theta and poor solvent conditions
NASA Astrophysics Data System (ADS)
Gergidis, Leonidas N.; Kalogirou, Andreas; Charalambopoulos, Antonios; Vlahos, Costas
2013-07-01
The effects of solvent quality on the internal stratification of polymer brushes formed by dendron polymers up to the third generation were studied by means of molecular dynamics simulations with a Langevin thermostat. The distributions of polymer units and free ends, the radii of gyration, and the back-folding probabilities of the dendritic spacers were studied in the macroscopic states of theta and poor solvent. For high grafting densities we observed a small decrease in the height of the brush as the solvent quality decreases. The internal stratification in theta solvent was similar to the one we found in good solvent, with two and in some cases three kinds of populations containing short dendrons with weakly extended spacers, intermediate-height dendrons, and tall dendrons with highly stretched spacers. The differences increase as the grafting density decreases, and single-dendron populations were evident in theta and poor solvent. In poor solvent at low grafting densities, solvent micelles, polymeric pinned lamellae, and spherical and single-chain collapsed micelles were observed. The scaling dependence of the height of high-density dendritic brushes in both solvents was found to agree with existing analytical results.
Local and neighboring patch conditions alter sex-specific movement in banana weevils.
Carval, Dominique; Perrin, Benjamin; Duyck, Pierre-François; Tixier, Philippe
2015-12-01
Understanding the mechanisms underlying the movements and spread of a species over time and space is a major concern of ecology. Here, we assessed the effects of an individual's sex and the density and sex ratio of conspecifics in the local and neighboring environment on the movement probability of the banana weevil, Cosmopolites sordidus. In a "two patches" experiment, we used radiofrequency identification tags to study the C. sordidus movement response to patch conditions. We showed that local and neighboring densities of conspecifics affect the movement rates of individuals but that the density-dependent effect can be either positive or negative depending on the relative densities of conspecifics in local and neighboring patches. We demonstrated that sex ratio also influences the movement of C. sordidus, that is, the weevil exhibits nonfixed sex-biased movement strategies. Sex-biased movement may be the consequence of intrasexual competition for resources (i.e., oviposition sites) in females and for mates in males. We also detected a high individual variability in the propensity to move. Finally, we discuss the role of demographic stochasticity, sex-biased movement, and individual heterogeneity in movement on the colonization process.
Mangen, M-J J; Nielen, M; Burrell, A M
2002-12-18
We examined the importance of pig-population density in the area of an outbreak of classical swine fever (CSF) for the spread of the infection and the choice of control measures. A spatial, stochastic, dynamic epidemiological simulation model linked to a sector-level market-and-trade model for The Netherlands was used. Outbreaks in sparsely and densely populated areas were compared under four different control strategies and with two alternative trade assumptions. The obligatory control strategy required by current EU legislation was predicted to be enough to eradicate an epidemic starting in an area with a sparse pig population. By contrast, additional control measures would be necessary if the outbreak began in an area with high pig density. The economic consequences of using preventive slaughter rather than emergency vaccination as an additional control measure depended strongly on the reactions of trading partners. Reducing the number of animal movements significantly reduced the size and length of epidemics in areas with high pig density. The phenomenon of carrier piglets was included in the model with realistic probabilities of infection by this route, but it made a negligible contribution to the spread of the infection.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, the pedestrian acceleration crossing probability, the pedestrian backing probability, and the pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases. When the vehicle braking probability is low, emergency braking is likely, resulting in large fluctuations of the saturated flow. The saturated flow decreases slightly as the pedestrian acceleration crossing probability increases. When the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians when deciding whether to back up. The maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and becoming approximately zero when the probability exceeds 0.5. The simulations prove that the influence of frequent crossing behavior on vehicle flow is immense; vehicle flow decreases and rapidly reaches a seriously congested state as the pedestrian generation probability increases.
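For reference, the sketch below implements only the underlying NaSch update (acceleration, braking to the gap, random slowdown, movement) on a circular road, without the paper's pedestrian-conflict rules; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
L, v_max, p_slow, density = 200, 5, 0.3, 0.2
pos = np.sort(rng.choice(L, size=int(density * L), replace=False))
vel = np.zeros(len(pos), dtype=int)

for step in range(1000):
    gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)               # 1. accelerate
    vel = np.minimum(vel, gaps)                    # 2. brake to avoid collision
    slow = rng.random(len(vel)) < p_slow
    vel[slow] = np.maximum(vel[slow] - 1, 0)       # 3. random slowdown
    pos = (pos + vel) % L                          # 4. move (periodic road)

flow = vel.mean() * density                        # mean flux (cars per cell per step)
print(f"density {density:.2f} -> flow {flow:.3f}")
```

Sweeping density from 0 to 1 reproduces the increasing-saturating-decreasing fundamental diagram onto which the pedestrian-conflict rules are grafted.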
Constraining the interior density profile of a Jovian planet from precision gravity field data
NASA Astrophysics Data System (ADS)
Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele
2017-10-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively. The resulting coefficients come with an associated uncertainty. And while the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.
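As a toy illustration of the rejection-sampling idea only: the forward model, coefficient value, and acceptance window below are all invented, and the real calculation requires a theory-of-figures solver rather than a simple density moment:

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.linspace(0.05, 1.0, 50)       # normalized equatorial radius of level surfaces
J2_OBS, TOL = 0.0147, 5e-4           # hypothetical coefficient and acceptance window

def random_profile():
    # a random monotone-decreasing rho(a), densest at the center
    rho = np.cumsum(rng.exponential(1.0, size=a.size))[::-1]
    return rho / rho.max()

def mock_j2(rho):
    # stand-in forward model: a density-weighted moment of the profile
    # (NOT the real theory-of-figures calculation)
    return 0.05 * np.trapz(rho * a**4, a) / np.trapz(rho * a**2, a)

sample = [p for p in (random_profile() for _ in range(50000))
          if abs(mock_j2(p) - J2_OBS) < TOL]
central = [p[0] for p in sample]     # any functional of rho gets a distribution this way
print(len(sample), "accepted;", np.percentile(central, [16, 50, 84]))
```

The accepted curves form a random sample consistent with the "measurement", so percentiles of any derived quantity (here the normalized central density) approximate its probability distribution, which is the core of the statistical approach described above.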
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.
2017-12-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from the Grand Finale orbits and discuss their implications.
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
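The density cx^k(1-x)^m is a beta density, so with a conjugate (binomial-type) observation model the posterior mean, which is the MMSE estimate, is linear in the observed count. A quick numerical check of that linearity, with illustrative prior parameters and trial count:

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 2.0, 4.0                 # density proportional to x^k (1-x)^m, i.e. Beta(k+1, m+1)
a, b = k + 1, m + 1
n, k_obs = 10, 4                # trials and observed jump count

# Monte Carlo MMSE estimate E[x | k_obs] versus the linear closed form (a + k_obs)/(a + b + n)
x = rng.beta(a, b, size=500_000)
counts = rng.binomial(n, x)
print(x[counts == k_obs].mean(), (a + k_obs) / (a + b + n))   # the two should agree closely
```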
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
SOMKE: kernel density estimation over data streams by sequences of self-organizing maps.
Cao, Yuan; He, Haibo; Man, Hong
2012-08-01
In this paper, we propose a novel method, SOMKE, for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters for estimating the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging the SOM sequences. The creation phase produces SOM sequence entries for windows of the data, which capture clustering information about the incoming data streams. The size of the SOM sequences can be further reduced by merging consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, the probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
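A rough sketch of the prototype-plus-kernel idea, with plain k-means standing in for the SOM and the KL-divergence merging step omitted; the window size, prototype count, and bandwidth are arbitrary choices here:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(3)

def window_summary(x, k=16):
    """Compress one data window into k weighted prototypes (k-means stands in for the SOM)."""
    obs = x.reshape(-1, 1).astype(float)
    centers, _ = kmeans(obs, k)
    labels, _ = vq(obs, centers)
    weights = np.bincount(labels, minlength=len(centers)).astype(float)
    return centers.ravel(), weights / weights.sum()

def kde_from_summaries(summaries, grid, h=0.3):
    """Evaluate a Gaussian-kernel density built from all retained (center, weight) pairs."""
    dens = np.zeros_like(grid)
    for centers, w in summaries:
        dens += w @ np.exp(-0.5 * ((grid - centers[:, None]) / h) ** 2)
    return dens / (len(summaries) * h * np.sqrt(2 * np.pi))

stream = [rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)]   # two arriving windows
summaries = [window_summary(w) for w in stream]               # only summaries are retained
grid = np.linspace(-4, 4, 200)
print(np.trapz(kde_from_summaries(summaries, grid), grid))    # ~1, a valid density
```

Only the (center, weight) summaries are kept per window, so memory grows with the number of windows rather than the number of stream samples, which is the point of the approach.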
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
A Partially Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
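A minimal numerical illustration of the delta-function quadrature closure, using a toy one-step rate law; the constants and the two-peak construction below are invented for illustration and are not FDS's actual chemistry interface:

```python
import numpy as np

def arrhenius_rate(Y, A=1e6, Ta=15.0):
    """Toy one-step fuel-consumption rate (illustrative only)."""
    return -A * Y * np.exp(-Ta / np.maximum(Y, 1e-12))

# Cell-mean fuel mass fraction plus a chosen unmixed fraction define two delta peaks
# whose weights and abscissas satisfy the cell-mean constraint sum(w * Y) = Y_mean:
Y_mean, f_unmixed = 0.2, 0.4
weights   = np.array([f_unmixed, 1.0 - f_unmixed])
abscissas = np.array([0.0, Y_mean / (1.0 - f_unmixed)])

mean_source = weights @ arrhenius_rate(abscissas)   # closure: quadrature over the subgrid PDF
print(mean_source, "vs naive rate at the mean:", arrhenius_rate(np.array([Y_mean]))[0])
```

Evaluating the rate at the quadrature abscissas rather than at the cell mean is what lets the closure capture subgrid unmixedness; with the unmixed fraction tied to a mixing time scale, this construction reduces to an EDC-like model, consistent with the limit noted in the abstract.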
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
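A Monte Carlo version of the idea for the two-interval forced-choice (2AFC) paradigm, not the authors' MATLAB code: the ideal observer picks the interval with the larger likelihood ratio, and the Gaussian equal-variance case is checked against the known closed form Φ(d′/√2):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def max_pc_2afc(pdf_noise, pdf_signal, sample_noise, sample_signal, n=200_000):
    """Monte Carlo maximum proportion correct for 2AFC with a likelihood-ratio observer."""
    xn, xs = sample_noise(n), sample_signal(n)
    lr = lambda x: pdf_signal(x) / pdf_noise(x)
    correct = lr(xs) > lr(xn)          # observer chooses the interval with larger LR
    ties = lr(xs) == lr(xn)            # ties are guessed, counting half
    return (correct.sum() + 0.5 * ties.sum()) / n

d = 1.0  # equal-variance Gaussian case: analytic maximum is Phi(d'/sqrt(2))
pc = max_pc_2afc(norm(0, 1).pdf, norm(d, 1).pdf,
                 lambda n: rng.normal(0, 1, n), lambda n: rng.normal(d, 1, n))
print(pc, norm.cdf(d / np.sqrt(2)))
```

Swapping in any other pdf/sampler pair (uniform, heavy-tailed, stimulus-induced noise) leaves the estimator unchanged, which mirrors the article's point that the formula applies to arbitrary distributions of sensory activity.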
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those reported in other studies, and good agreement is found between them.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, which can assume positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities were also shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scales of the fluctuations in the variance and the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even though they may not be. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
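The mass-density weighting that defines the APDF mean is a Favre (density-weighted) average; a small numerical illustration, with synthetic numbers, of how it differs from the unweighted ensemble mean:

```python
import numpy as np

rng = np.random.default_rng(5)
rho = rng.uniform(0.5, 1.5, 10_000)                  # ensemble of instantaneous densities
phi = 300.0 + 20.0 * rho + rng.normal(0, 5, 10_000)  # a scalar correlated with density

favre_mean = np.average(phi, weights=rho)            # <rho*phi> / <rho>
reynolds_mean = phi.mean()                           # plain ensemble average
print(favre_mean, reynolds_mean)                     # differ whenever phi and rho correlate
```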
Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
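For intuition, a constant-capture-probability removal estimator of the kind that underlies three-pass data (a simplified, non-hierarchical stand-in for the authors' Bayesian model; the catch numbers are invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_estimate(catches):
    """Constant-p removal estimator (Zippin-type MLE) for k-pass depletion data."""
    c = np.asarray(catches, dtype=float)
    k, total = len(c), c.sum()
    passes = np.arange(k)

    def nll(p):
        q = 1.0 - p
        # conditional multinomial likelihood of the pass of capture, given capture
        probs = p * q**passes / (1.0 - q**k)
        return -np.sum(c * np.log(probs))

    p_hat = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    n_hat = total / (1.0 - (1.0 - p_hat)**k)   # scale total catch by overall capture prob.
    return p_hat, n_hat

print(removal_estimate([120, 45, 20]))          # hypothetical three-pass catches
```

The strong first-pass CPUE correlation reported above reflects exactly this structure: when p varies little across sites and years, the first-pass catch is nearly proportional to n_hat.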
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Study of the enhancement-mode AlGaN/GaN high electron mobility transistor with split floating gates
NASA Astrophysics Data System (ADS)
Wang, Hui; Wang, Ning; Jiang, Ling-Li; Zhao, Hai-Yue; Lin, Xin-Peng; Yu, Hong-Yu
2017-11-01
In this work, enhancement-mode (E-mode) AlGaN/GaN high electron mobility transistors (HEMTs) with charge-storage-based split floating gates (FGs) are studied. The simulation results reveal that, for a given density of the two-dimensional electron gas, the variation tendency of the threshold voltage (Vth) with the blocking dielectric thickness depends on the FG charge density. It is found that when the total length and the total isolating spacing of the FGs both remain unchanged, Vth decreases with an increasing number of FGs while the device remains E-mode. It is also reported that for the FGs HEMT, the failure of a single FG leads to a decrease of Vth and an increase of the drain current, and that the impact of such a failure can be reduced significantly by increasing the number of FGs.
Experimental demonstration of an Allee effect in microbial populations.
Kaul, RajReni B; Kramer, Andrew M; Dobbs, Fred C; Drake, John M
2016-04-01
Microbial populations can be dispersal limited. However, microorganisms that successfully disperse into physiologically ideal environments are not guaranteed to establish. This observation contradicts the Baas-Becking tenet: 'Everything is everywhere, but the environment selects'. Allee effects, which manifest in the relationship between initial population density and probability of establishment, could explain this observation. Here, we experimentally demonstrate that small populations of Vibrio fischeri are subject to an intrinsic demographic Allee effect. Populations subjected to predation by the bacterivore Cafeteria roenbergensis display both intrinsic and extrinsic demographic Allee effects. The estimated critical threshold required to escape positive density-dependence is around 5, 20 or 90 cells ml^-1 under conditions of high carbon resources, low carbon resources or low carbon resources with predation, respectively. This work builds on the foundations of modern microbial ecology, demonstrating that mechanisms controlling macroorganisms apply to microorganisms, and provides a statistical method to detect Allee effects in data. © 2016 The Author(s).
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
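The eigenvalue step amounts to a Euclidean projection onto the probability simplex; a sketch using the standard sorting-based projection, which for a trace-one input matches the truncation step the paper describes:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def nearest_physical_state(mu):
    """Closest (2-norm) positive-semidefinite, trace-one matrix to a Hermitian candidate mu."""
    evals, evecs = np.linalg.eigh(mu)
    lam = project_to_simplex(evals)          # fix the spectrum, keep the eigenbasis
    return (evecs * lam) @ evecs.conj().T

mu = np.diag([0.6, 0.5, -0.1])               # trace one, but one negative eigenvalue
rho = nearest_physical_state(mu)
print(np.diag(rho).real, np.trace(rho).real) # [0.55, 0.45, 0.0], trace 1
```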
NASA Astrophysics Data System (ADS)
Dasgupta, Arunima; Sastry, K. L. N.; Dhinwa, P. S.; Rathore, V. S.; Nathawat, M. S.
2013-08-01
Desertification risk assessment is important in order to take proper measures for its prevention. The present research aims to identify the areas at risk of desertification, along with the severity of that risk in terms of degradation of natural parameters. An integrated model combining fuzzy membership analysis, a fuzzy rule-based inference system and geospatial techniques was adopted, including five specific natural parameters, namely slope, soil pH, soil depth, soil texture and NDVI. Individual parameters were classified according to their deviation from the mean. The membership of each individual value in a certain class was derived using the normal probability density function of that class. Thus, if a single class of a single parameter has mean μ and standard deviation σ, the values falling beyond μ + 2σ and μ - 2σ do not represent that class but a transitional zone between two subsequent classes. These are the most important areas in terms of degradation, as they have the lowest probability of belonging to a certain class and hence the highest probability of being extended into the next class or narrowed into the previous one. Consequently, these are the values that can be most easily altered under exogenic influences, and they are therefore identified as risk areas. The overall desertification risk is derived by incorporating the different risk severities of each parameter using a fuzzy rule-based inference system in a GIS environment. Multicriteria-based geostatistics are applied to locate the areas under different severities of desertification risk. The study revealed that in Kota, various anthropogenic pressures are accelerating land deterioration, coupled with natural erosive forces. The four major sources of desertification in Kota are gully and ravine erosion, inappropriate mining practices, growing urbanization and random deforestation.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Bremer, J. E.; Harter, T.
2012-08-01
Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
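A back-of-the-envelope version of the overlap probability, under the idealized assumption that drainfield centers form a spatial Poisson process (all numbers illustrative; the paper's estimates come from detailed flow-and-transport modeling, not from this shortcut):

```python
import numpy as np

def p_pump_leachate(septic_density_per_km2, source_area_m2, drainfield_m2):
    """Poisson sketch: chance a well's capture-zone source area overlaps >= 1 drainfield.

    The effective area is the source area enlarged by the drainfield footprint
    (a rough Minkowski-sum argument); lam is the density of drainfield centers.
    """
    lam = septic_density_per_km2 / 1e6            # centers per m^2
    effective_area = source_area_m2 + drainfield_m2
    return 1.0 - np.exp(-lam * effective_area)

for dens in (10, 50, 200):                        # septic systems per km^2
    print(dens, p_pump_leachate(dens, 5000.0, 100.0))
```

Even this crude calculation shows the probability climbing steeply with septic density, the qualitative behavior reported in the study.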
NASA Astrophysics Data System (ADS)
Stephanik, Brian Michael
This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve a high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. As a result, we present guidelines for the sensor density, tracking probability, and number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
Universal characteristics of fractal fluctuations in prime number distribution
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-11-01
The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations concomitant with an inverse power law form for the power spectrum, generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of the long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals to result from the growth of large eddies as the integrated mean of enclosed small-scale eddies, thereby generating a hierarchy of eddy circulations, or an inter-connected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law, which is a function of the golden mean. The predicted inverse power law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos, since the variance spectrum represents the probability density distribution, a characteristic of quantum systems such as the electron or photon. (3) Fractal fluctuations of the frequency distribution of prime numbers signify spontaneous organization of the underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequency of occurrence of prime numbers at unit number intervals for successive 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration to exceed regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages to this approach: (i) problems of classical indicator cokriging, such as estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in southern Spain, and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
Properties of Traffic Risk Coefficient
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use a model that incorporates the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction depends on the density, which indicates that this model can improve traffic safety.
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms, using a probability density function of rainfall volumes over the last two years, showed that all the gauged storms were valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there were significant differences between the observed MFFn and the probability MFFn.
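A sketch of how an MFF ratio is computed from paired flow and concentration series (synthetic storm data; an MFF20 of 2 would correspond to 40% of the mass arriving in the first 20% of the volume):

```python
import numpy as np

def mass_first_flush(flow, conc, volume_fraction=0.2):
    """MFF_n: fraction of pollutant mass in the first n% of runoff volume, divided by n."""
    flow, conc = np.asarray(flow, float), np.asarray(conc, float)
    vol = np.cumsum(flow); vol /= vol[-1]                 # normalized cumulative volume
    mass = np.cumsum(flow * conc); mass /= mass[-1]       # normalized cumulative mass
    mass_at_n = np.interp(volume_fraction, vol, mass)
    return mass_at_n / volume_fraction

# Synthetic storm with concentrations decaying over the event (a first-flush signature):
t = np.arange(24)
flow = np.exp(-((t - 6.0) / 4.0) ** 2)
conc = 100.0 * np.exp(-t / 5.0)
print("MFF20 =", mass_first_flush(flow, conc))            # > 1 indicates a first flush
```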
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
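Solving Chantry's relation τ0 = 2l0(2 - α)/(vα) for the sticking probability gives α = 4l0/(vτ0 + 2l0); a small sketch with hypothetical chamber and lifetime numbers:

```python
import numpy as np

def sticking_probability(tau0, l0, v):
    """Invert Chantry's relation tau0 = 2*l0*(2 - alpha) / (v*alpha) for alpha."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

# Hypothetical inputs: volume-to-surface ratio l0 = 0.03 m, Cu mean speed at ~500 K,
# and an extrapolated zero-pressure lifetime of 0.2 ms.
kB, m_cu = 1.380649e-23, 63.5 * 1.66054e-27
v_cu = np.sqrt(8 * kB * 500 / (np.pi * m_cu))     # Maxwellian mean speed, ~410 m/s
print(sticking_probability(tau0=2e-4, l0=0.03, v=v_cu))
```

Because α depends on τ0 only through the product vτ0, equal lifetime ratios and reciprocal-velocity ratios imply equal sticking probabilities, which is the argument made in the abstract.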
Baena, Martha L.; Escobar, Federico; Halffter, Gonzalo; García–Chávez, Juan H.
2015-01-01
Omorgus suberosus (Fabricius, 1775) has been identified as a potential predator of the eggs of the turtle Lepidochelys olivacea (Eschscholtz, 1829) on one of the main turtle nesting beaches in the world, La Escobilla in Oaxaca, Mexico. This study presents an analysis of the spatio–temporal distribution of the beetle on this beach (in areas of high and low density of L. olivacea nests over two arrival seasons) and an evaluation, under laboratory conditions, of the probability of damage to the turtle eggs by this beetle. O. suberosus adults and larvae exhibited an aggregated pattern at both turtle nest densities; however, aggregation was greater in areas of low nest density, where we found the highest proportion of damaged eggs. Also, there were fluctuations in the temporal distribution of the adult beetles following the arrival of the turtles on the beach. Under laboratory conditions, the beetles quickly damaged both dead eggs and a mixture of live and dead eggs, but were found to consume live eggs more slowly. This suggests that O. suberosus may be recycling organic material; however, its consumption of live eggs may be sufficient in some cases to interrupt the incubation period of the turtle. We intend to apply these results when making decisions regarding the L. olivacea nests on La Escobilla Beach, one of the most important sites for the conservation of this species. PMID:26422148
Chen, Yungting; Shih, Hanyu; Wang, Chunhsiung; Hsieh, Chunyi; Chen, Chihwei; Chen, Yangfang; Lin, Taiyuan
2011-05-09
Based on hybrid inorganic/organic n-ZnO nanorods/p-GaN thin film/poly(3-hexylthiophene) (P3HT) dual heterojunctions, the light emitting diode (LED) emits ultraviolet (UV) radiation (370-400 nm) and the whole visible range (400-700 nm) at low injection current density. Meanwhile, at high injection current density, the UV radiation overwhelmingly dominates the room-temperature electroluminescence spectra, increases exponentially with the injection current density and possesses a narrow full width at half maximum of less than 16 nm. A comparison of the electroluminescence and photoluminescence spectra reveals an enormously enhanced transition probability of the UV luminescence in the electroluminescence spectra. The P3HT layer plays an essential role in enabling the UV emission from the p-GaN material because of its hole-conductive characteristic as well as its band alignment with respect to p-GaN. This finding may pave a new route for the development of high-brightness LEDs based on hybrid inorganic/organic heterojunctions.
Statistical analysis of dislocations and dislocation boundaries from EBSD data.
Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N
2017-08-01
Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows measurement noise to be accounted for and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle applied earlier to TEM data. In previous papers, dislocation boundaries were classified as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route to determining the IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
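A sketch of fitting the two-Rayleigh mixture to disorientation-gradient data by maximum likelihood (synthetic data; the component scales and mixture weight below are invented):

```python
import numpy as np
from scipy.optimize import minimize

def rayleigh_pdf(x, s):
    return (x / s**2) * np.exp(-x**2 / (2 * s**2))

def neg_log_lik(params, x):
    w, s1, s2 = params
    pdf = w * rayleigh_pdf(x, s1) + (1 - w) * rayleigh_pdf(x, s2)
    return -np.sum(np.log(np.maximum(pdf, 1e-300)))

rng = np.random.default_rng(6)
# Synthetic data: a narrow (IDB-like) and a wide (GNB-like) Rayleigh component
x = np.concatenate([rng.rayleigh(0.3, 7000), rng.rayleigh(1.2, 3000)])

res = minimize(neg_log_lik, x0=[0.5, 0.2, 1.0], args=(x,),
               bounds=[(0.01, 0.99), (1e-3, None), (1e-3, None)])
print(res.x)   # recovered mixture weight and the two Rayleigh scale parameters
```

The fitted weight and scales are what would then be interpreted as the separate IDB and GNB contributions.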
NASA Astrophysics Data System (ADS)
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Self-Supervised Dynamical Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
Turbulence and the Stabilization Principle
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Further results of research, reported in several previous NASA Tech Briefs articles, were obtained on a mathematical formalism for postinstability motions of a dynamical system characterized by exponential divergences of trajectories leading to chaos (including turbulence). To recapitulate: fictitious control forces are introduced to couple the dynamical equations with a Liouville equation that describes the evolution of the probability density of errors in initial conditions. These forces create a powerful terminal attractor in probability space that corresponds to occurrence of a target trajectory with probability one. The effect in ordinary perceived three-dimensional space is to suppress exponential divergences of neighboring trajectories without affecting the target trajectory. Consequently, the postinstability motion is represented by a set of functions describing the evolution of such statistical quantities as expectations and higher moments, and this representation is stable. The previously reported findings are analyzed from the perspective of the authors' Stabilization Principle, according to which (1) stability is recognized as an attribute of mathematical formalism rather than of underlying physics and (2) a dynamical system that appears unstable when modeled by differentiable functions only can be rendered stable by modifying the dynamical equations to incorporate intrinsic stochasticity.
Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha
2013-09-01
Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan
NASA Astrophysics Data System (ADS)
Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.
2016-12-01
The microbial populations and the limit of microbial life are probably constrained by chemical, physical, and geological conditions, such as temperature, pore water chemistry, pH, and water activity; however, the key parameters affecting growth in deep subseafloor sediments remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off the Shimokita Peninsula, Japan, to investigate microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm^3 was recorded below 2000 mbsf), and that the coal bed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, keeping the differential pore pressure between 0.1 and 0.8 MPa. Our results show that the permeability of the core samples decreases with depth, from 10^-16 m^2 at the seafloor to 10^-20 m^2 at the bottom of the hole. However, permeability is highly scattered within the coal bed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale; the scatter of permeabilities within the same unit is therefore due to the high variation in lithology. The highest permeability was observed in coal samples, probably owing to the formation of micro cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of core samples, even though the relative permeability variation in the vertical direction is quite similar between core and logging data. Higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, the flow rate is proportional to the permeability of the sediments. Flow rate probably restricts the availability of energy and nutrients for microorganisms; permeability might therefore have influenced microbial activity in the coalbed basin.
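For orientation, the steady-state flow method reduces to Darcy's law: permeability follows from the measured flow rate, fluid viscosity, sample geometry, and differential pore pressure. A minimal sketch with illustrative values, not the expedition's data:

```python
import math

def permeability_darcy(q, mu, length, area, dp):
    """Darcy's law in SI units: k = Q * mu * L / (A * dP), giving k in m^2."""
    return q * mu * length / (area * dp)

# Hypothetical example: water (mu ~ 1e-3 Pa s) flowing at 1e-12 m^3/s through
# a 5 cm long core of 2.5 cm radius under 0.5 MPa differential pore pressure.
area = math.pi * 0.025 ** 2                       # core cross-section, m^2
k = permeability_darcy(q=1e-12, mu=1e-3, length=0.05, area=area, dp=0.5e6)
print(f"k = {k:.2e} m^2")                         # ~5e-20 m^2, in the range above
```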
Quantum mechanical probability current as electromagnetic 4-current from topological EM fields
NASA Astrophysics Data System (ADS)
van der Mark, Martin B.
2015-09-01
Starting from a complex 4-potential A = αdβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl(1,3) as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general null-vector condition is found, and wave functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Accounting for unsearched areas in estimating wind turbine-caused fatality
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
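The distance-based adjustment described here can be sketched as: fit a distance distribution to observed carcass locations, compute the probability mass inside the searched radius, and inflate the raw count accordingly. The Rayleigh form and all numbers below are illustrative assumptions, not the estimators compared in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical carcass distances from the turbine base (m)
distances = np.array([12., 18., 25., 31., 40., 44., 52., 60., 75., 90.])

# Fit a Rayleigh distance distribution with its location fixed at the turbine
_, scale = stats.rayleigh.fit(distances, floc=0)
r_search = 50.0                                   # searched radius (m)
p_in = stats.rayleigh.cdf(r_search, loc=0, scale=scale)

observed = np.sum(distances <= r_search)          # carcasses found in searched area
print(f"P(within {r_search} m) = {p_in:.2f}; adjusted total = {observed / p_in:.1f}")
```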
Action Potential Waveform Variability Limits Multi-Unit Separation in Freely Behaving Rats
Stratton, Peter; Cheung, Allen; Wiles, Janet; Kiyatkin, Eugene; Sah, Pankaj; Windels, François
2012-01-01
Extracellular multi-unit recording is a widely used technique to study spontaneous and evoked neuronal activity in awake behaving animals. These recordings are done using either single-wire or multiwire electrodes such as tetrodes. In this study we have tested the ability of single-wire electrodes to discriminate activity from multiple neurons under conditions of varying noise and neuronal cell density. Using extracellular single-unit recording, coupled with iontophoresis to drive cell activity across a wide dynamic range, we studied spike waveform variability, and explored systematic differences in single-unit spike waveform within and between brain regions as well as the influence of signal-to-noise ratio (SNR) on the similarity of spike waveforms. We also modelled spike misclassification for a range of cell densities based on neuronal recordings obtained at different SNRs. Modelling predictions were confirmed by classifying spike waveforms from multiple cells with various SNRs using a leading commercial spike-sorting system. Our results show that for single-wire recordings, multiple units can only be reliably distinguished under conditions of high recording SNR (≥4) and low neuronal density (≈20,000/mm^3). Physiological and behavioural changes, as well as technical limitations typical of awake animal preparations, reduce the accuracy of single-channel spike classification, resulting in serious classification errors. For SNR <4, the probability of misclassifying spikes approaches 100% in many cases. Our results suggest that in studies where the SNR is low or neuronal density is high, separation of distinct units needs to be evaluated with great caution. PMID:22719894
Robles, Hugo; Ciudad, Carlos
2012-04-01
Despite extensive research on the effects of habitat fragmentation, the ecological mechanisms underlying colonization and extinction processes are poorly known, but knowledge of these mechanisms is essential to understanding the distribution and persistence of populations in fragmented habitats. We examined these mechanisms through multiseason occupancy models that elucidated patch-occupancy dynamics of Middle Spotted Woodpeckers (Dendrocopos medius) in northwestern Spain. The number of occupied patches was relatively stable from 2000 to 2010 (15-24% of 101 patches occupied every year) because extinction was balanced by recolonization. Larger and higher quality patches (i.e., higher density of oaks >37 cm dbh [diameter at breast height]) were more likely to be occupied. Habitat quality (i.e., density of large oaks) explained more variation in patch colonization and extinction than did patch size and connectivity, which were both weakly associated with probabilities of turnover. Patches of higher quality were more likely to be colonized than patches of lower quality. Populations in high-quality patches were less likely to become extinct. In addition, extinction in a patch was strongly associated with local population size but not with patch size, which means the latter may not be a good surrogate of population size in assessments of extinction probability. Our results suggest that habitat quality may be a primary driver of patch-occupancy dynamics and may increase the accuracy of models of population survival. We encourage comparisons of competing models that assess occupancy, colonization, and extinction probabilities in a single analytical framework (e.g., dynamic occupancy models) so as to shed light on the association of habitat quality and patch geometry with colonization and extinction processes in different settings and species.
Derived distribution of floods based on the concept of partial area coverage with a climatic appeal
NASA Astrophysics Data System (ADS)
Iacobellis, Vito; Fiorentino, Mauro
2000-02-01
A new rationale is presented for deriving the probability distribution of floods and helping to understand the physical processes underlying the distribution itself. On this basis, a model incorporating a number of new assumptions is developed. The basic ideas are as follows: (1) the peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density function of a times the density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency, as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge, and suggested a new key to understanding the climatic control of the probability distribution of floods.
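The integral for the density of Q = ua × a can be evaluated numerically. Below is a hedged sketch with a ~ gamma and ua | a ~ Weibull; the parameter values and the power-law dependence of the Weibull scale on a are invented for illustration, not fitted to the Basilicata basins:

```python
import numpy as np
from scipy import stats, integrate

A = 500.0                                   # total basin area, km^2 (illustrative)
f_a = stats.gamma(a=2.0, scale=60.0).pdf    # density of peak contributing area

def f_u_given_a(u, a):
    # Weibull density for unit-area runoff; scale shrinking with a is an assumption
    return stats.weibull_min(c=1.5, scale=5.0 * (a / A) ** -0.2).pdf(u)

def f_Q(q):
    # f_Q(q) = integral over a of f_a(a) * f_{u|a}(q/a) / a
    integrand = lambda a: f_a(a) * f_u_given_a(q / a, a) / a
    val, _ = integrate.quad(integrand, 1e-3, A, limit=200)
    return val

print(f_Q(300.0))   # density of peak streamflow at Q = 300 (in ua * a units)
```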
Füchsel, Gernot; Schimka, Selina; Saalfrank, Peter
2013-09-12
The role of electronic friction and, more generally, of nonadiabatic effects during dynamical processes at the gas/metal surface interface is still a matter of discussion. In particular, it is not clear if electronic nonadiabaticity has an effect under "mild" conditions, when molecules in low rovibrational states interact with a metal surface. In this paper, we investigate the role of electronic friction on the dissociative sticking and (inelastic) scattering of vibrationally and rotationally cold H2 molecules at a Ru(0001) surface theoretically. For this purpose, classical molecular dynamics with electronic friction (MDEF) calculations are performed and compared to MD simulations without friction. The two H atoms move on a six-dimensional potential energy surface generated from gradient-corrected density functional theory (DFT), that is, all molecular degrees of freedom are accounted for. Electronic friction is included via atomic friction coefficients obtained from an embedded atom, free electron gas (FEG) model, with embedding densities taken from gradient-corrected DFT. We find that within this model, dissociative sticking probabilities as a function of impact kinetic energies and impact angles are hardly affected by nonadiabatic effects. If one accounts for a possibly enhanced electronic friction near the dissociation barrier, on the other hand, reduced sticking probabilities are observed, in particular, at high impact energies. Further, there is always an influence on inelastic scattering, in particular, as far as the translational and internal energy distribution of the reflected molecules is concerned. Additionally, our results shed light on the role played by the velocity distribution of the incident molecular beam for adsorption probabilities, where, in particular, at higher impact energies, large effects are found.
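A one-dimensional caricature of the MDEF idea is ordinary molecular dynamics plus a position-dependent friction term -η(x)v and its fluctuation-dissipation counterpart. The potential, friction profile, and parameters below are purely illustrative, not the six-dimensional DFT surface or embedded-atom friction coefficients used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x):
    # Stand-in double-well potential with a barrier near x = 0
    return -4.0 * x * (x**2 - 1.0)

def eta(x):
    # Hypothetical position-dependent electronic friction coefficient
    return 0.5 * np.exp(-x**2)

m, kT, dt = 1.0, 0.05, 1e-3
x, v = -1.5, 1.2                    # "incident" particle approaching the barrier
for _ in range(20_000):
    e = eta(x)
    rand = np.sqrt(2.0 * e * kT / dt) * rng.normal()   # fluctuation-dissipation pair
    a = (force(x) - e * v + rand) / m
    v += a * dt
    x += v * dt
# Rerunning with eta = 0 mimics the MD-versus-MDEF comparison described above.
```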
Weyer-Menkhoff, I; Thrun, M C; Lötsch, J
2018-05-01
Pain in response to noxious cold has a complex molecular background, probably involving several types of sensors. A recent observation has been the multimodal distribution of human cold pain thresholds. This study aimed at analysing the reproducibility and stability of this observation and at further exploring data patterns supporting a complex background. Pain thresholds to noxious cold stimuli (range 32-0 °C, tonic: temperature decrease -1 °C/s, phasic: temperature decrease -8 °C/s) were acquired in 148 healthy volunteers. The probability density distribution was analysed using machine-learning derived methods implemented as Gaussian mixture modeling (GMM), emergent self-organizing maps and self-organizing swarms of data agents. The probability density function of pain responses was trimodal (mean thresholds at 25.9, 18.4 and 8.0 °C for tonic and 24.5, 18.1 and 7.5 °C for phasic stimuli). Subjects' association with Gaussian modes was consistent between both types of stimuli (weighted Cohen's κ = 0.91). Patterns emerging in self-organizing neuronal maps and swarms could be associated with different trends towards decreasing cold pain sensitivity in different Gaussian modes. On self-organizing maps, the third Gaussian mode emerged as particularly distinct. Thresholds at, roughly, 25 and 18 °C agree with the known working temperatures of the TRPM8 and TRPA1 ion channels, respectively, and hint at relative local dominance of either channel in the respective subjects. Data patterns suggest the involvement of further distinct mechanisms in cold pain perception at lower temperatures. The findings support data science approaches to identify biologically plausible hints at complex molecular mechanisms underlying human pain phenotypes. Sensitivity to pain is heterogeneous. Data-driven computational research approaches allow the identification of subgroups of subjects with distinct patterns of sensitivity to cold stimuli. The subgroups are reproducible with different types of noxious cold stimuli and show patterns that hint at distinct, interindividually different types of underlying molecular background.
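The GMM step can be reproduced in outline with standard tools. The sketch below fits a three-component mixture to synthetic thresholds drawn near the reported tonic modes (25.9, 18.4 and 8.0 °C); it is not the authors' pipeline, and the group sizes are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
thresholds = np.concatenate([
    rng.normal(25.9, 2.0, 70),    # hypothetical high-sensitivity group
    rng.normal(18.4, 2.0, 50),
    rng.normal(8.0, 2.5, 28),     # hypothetical low-sensitivity group
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(thresholds)
print("means:", np.sort(gmm.means_.ravel()))
print("weights:", gmm.weights_)
labels = gmm.predict(thresholds)  # subject-to-mode assignment, as in the kappa check
```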
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can support dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structural response have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
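For orientation, here is the classical OSR formula that the paper's random-function constraints compress: a sum of weighted cosines with random phases. The reduction to two elementary random variables is not reproduced in this sketch, and the power spectral density and parameters are illustrative:

```python
import numpy as np

def simulate_osr(S, w_max, N, t):
    """One sample function via X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)."""
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw                       # frequency grid midpoints
    phi = np.random.uniform(0.0, 2.0 * np.pi, N)        # independent random phases
    amp = np.sqrt(2.0 * S(w) * dw)
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

S = lambda w: 1.0 / (1.0 + w**2)                        # illustrative one-sided PSD
t = np.linspace(0.0, 50.0, 2001)
x = simulate_osr(S, w_max=20.0, N=512, t=t)             # one stationary sample path
```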
NASA Astrophysics Data System (ADS)
Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi
2014-07-01
{110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure has large variation among classes or subclasses, which is mainly related to the crystallographic direction of the constituting aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions for the twin planes. The former is probably characteristic in the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in the microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated to the crystal size in the microstructures.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
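A minimal sketch of a stratified, discrete-mass approximation: a continuous density is replaced by point masses placed in each stratum, so expectations become finite sums. The stratum edges and midpoint placement below are illustrative choices, not the paper's construction:

```python
import numpy as np
from scipy import stats

edges = np.linspace(-4.0, 4.0, 9)            # strata covering an N(0,1) density
probs = np.diff(stats.norm.cdf(edges))       # probability mass of each stratum
probs /= probs.sum()                          # renormalize the truncated tails
centers = 0.5 * (edges[:-1] + edges[1:])     # point-mass locations (midpoints)

# E[g(X)] is then approximated by sum(probs * g(centers)); e.g. for g(x) = x^2:
print(np.sum(probs * centers**2))            # ~1 for the unit normal
```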
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
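For the unconstrained continuous case, the maximizing density with a fixed pth absolute moment is the generalized Gaussian f(x) = p/(2aΓ(1/p)) exp(-(|x|/a)^p), whose pth absolute moment works out to a^p/p, so the scale a follows from the Lp norm. The straight-line relation between maximum entropy and log Lp norm can then be checked numerically; this sketch derives that relation under the stated form rather than quoting the report's tables:

```python
import numpy as np
from scipy import integrate, special

def max_entropy(p, lp_norm):
    """Differential entropy of the generalized Gaussian with given Lp norm."""
    a = lp_norm * p ** (1.0 / p)           # scale from E|X|^p = a^p / p
    f = lambda x: p / (2 * a * special.gamma(1 / p)) * np.exp(-(abs(x) / a) ** p)
    h, _ = integrate.quad(lambda x: -f(x) * np.log(f(x)), -np.inf, np.inf)
    return h

for lp in (0.5, 1.0, 2.0, 4.0):
    print(lp, max_entropy(2.0, lp))        # entropy grows by ln 2 per doubling of Lp
```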
Bridge reliability assessment based on the PDF of long-term monitored extreme strains
NASA Astrophysics Data System (ADS)
Jiao, Meiju; Sun, Limin
2011-04-01
Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analysis of the randomness in loads and their effects on structures. A novel approach combining SHM systems with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system is presented in this paper. In this study, the reliability of the steel girder of the cable-stayed bridge is expressed directly as a failure probability rather than as the commonly used reliability index. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of the failure probability is deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations is evaluated from the monitoring data and verified. The Donghai Bridge is taken as an example application of the proposed approach. In the case study, 4 years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results are discussed. Finally, the sensitivity and accuracy of the novel approach, compared with FORM, are discussed.
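The failure-probability formulation can be illustrated with independent resistance R and monitored extreme strain S, where P_f = P(R < S) = ∫ f_S(s) F_R(s) ds. The normal distributions and numbers below are placeholders, not the bridge's monitored statistics:

```python
from scipy import stats, integrate

# PDF of the monitored extreme strain S and CDF of the resistance R (microstrain);
# both distributions here are hypothetical stand-ins.
f_S = stats.norm(loc=350.0, scale=40.0).pdf
F_R = stats.norm(loc=600.0, scale=50.0).cdf

# P_f = integral of f_S(s) * P(R < s) over the plausible strain range
pf, _ = integrate.quad(lambda s: f_S(s) * F_R(s), 0.0, 1000.0)
print(f"failure probability = {pf:.3e}")
```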
Spacecraft Collision Avoidance
NASA Astrophysics Data System (ADS)
Bussy-Virat, Charles
The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from current epoch to closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by the solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
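The Monte-Carlo core of such a probability-of-collision computation is compact: perturb the relative state at closest approach from its covariance and count the samples falling inside the combined hard-body radius. The sketch below omits the orbit propagation and atmospheric-density uncertainty that SpOCK adds, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_miss = np.array([50.0, 20.0, 5.0])        # relative position at TCA, m (hypothetical)
cov = np.diag([80.0, 60.0, 30.0]) ** 2         # combined position covariance, m^2
hard_body_radius = 20.0                        # combined object radius, m

# Sample relative positions from the covariance and count "hits"
samples = rng.multivariate_normal(mean_miss, cov, size=1_000_000)
hits = np.linalg.norm(samples, axis=1) < hard_body_radius
print(f"P(collision) ~ {hits.mean():.2e}")
```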
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
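A minimal sketch of the passive-versus-active comparison: a self-propelled particle whose velocity relaxes toward speed v0 along a diffusing heading, with noise applied either isotropically (passive) or along the heading only (active). The relaxation model and all parameters are illustrative, not the paper's exact equations:

```python
import numpy as np

def simulate_speeds(active, n_steps=50_000, dt=1e-3, v0=1.0, gamma=5.0,
                    d_v=2.0, d_phi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    v, phi = np.array([v0, 0.0]), 0.0
    speeds = np.empty(n_steps)
    for i in range(n_steps):
        heading = np.array([np.cos(phi), np.sin(phi)])
        drift = -gamma * (v - v0 * heading)        # relax toward self-propulsion
        if active:
            noise = np.sqrt(2 * d_v * dt) * rng.normal() * heading   # heading-frame
        else:
            noise = np.sqrt(2 * d_v * dt) * rng.normal(size=2)       # isotropic
        v = v + drift * dt + noise
        phi += np.sqrt(2 * d_phi * dt) * rng.normal()                # orientation diffusion
        speeds[i] = np.linalg.norm(v)
    return speeds

s_active, s_passive = simulate_speeds(True), simulate_speeds(False)
# Histogramming the two arrays shows extra weight at low speeds for active noise.
```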
Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-04-17
We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → b ℓ′⁺ ν_ℓ′ b̄ ℓ⁻ ν̄_ℓ. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7(stat) ± 2.9(syst) GeV/c².
NASA Astrophysics Data System (ADS)
Alimi, Jean-Michel; de Fromont, Paul
2018-04-01
The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly to minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.
The SDSS-XDQSO quasar targeting catalog
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hennawi, J. F.; Hogg, D. W.; Myers, A. D.; Ross, N. P.
2011-01-01
We present the SDSS-XDQSO quasar targeting catalog for efficient flux-based quasar target selection down to the faint limit of the SDSS catalog, even at medium redshifts (2.5 < z < 3). We build models of the distributions of stars and quasars in flux space down to the flux limit by applying the extreme-deconvolution method (XD) to estimate the underlying density. We properly convolve this density with the flux uncertainties when evaluating the probability that an object is a quasar. This results in a targeting algorithm that is more principled, more efficient, and faster than other similar methods. We apply the algorithm to derive low- (z < 2.2) and medium-redshift (2.2 <= z <= 3.5) quasar probabilities for all 160,904,060 point sources with dereddened i-band magnitude between 17.75 and 22.45 mag in SDSS Data Release 8. The catalog can be used to define a uniformly selected and efficient low- or medium-redshift quasar survey, such as that needed for the SDSS-III Baryon Oscillation Spectroscopic Survey project. We show that the XDQSO technique performs as well as the current best photometric quasar selection technique at low redshift, and outperforms all other flux-based methods for selecting the medium-redshift quasars of our primary interest. Research supported by NASA (grant NNX08AJ48G) and the NSF (grant AST-0908357).
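The probabilistic selection step has this generic shape: class-conditional flux densities, convolved with each object's flux errors, combined with number-count priors via Bayes' rule. In the sketch below, single Gaussians stand in for the extreme-deconvolution mixtures, and the loci, covariances, and priors are invented for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

def p_quasar(flux, flux_cov, mu_q, cov_q, mu_s, cov_s, n_q=0.05, n_s=0.95):
    # Convolving a Gaussian density with Gaussian flux errors adds the covariances.
    l_q = multivariate_normal.pdf(flux, mu_q, cov_q + flux_cov)
    l_s = multivariate_normal.pdf(flux, mu_s, cov_s + flux_cov)
    return n_q * l_q / (n_q * l_q + n_s * l_s)   # Bayes' rule with number-count priors

mu_q, cov_q = np.array([0.2, 0.1]), np.diag([0.04, 0.04])   # hypothetical quasar locus
mu_s, cov_s = np.array([0.8, 0.5]), np.diag([0.09, 0.09])   # hypothetical stellar locus
print(p_quasar(np.array([0.3, 0.2]), np.diag([0.01, 0.01]),
               mu_q, cov_q, mu_s, cov_s))
```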
NASA Astrophysics Data System (ADS)
Garcia-Castello, Nuria; Illera, Sergio; Guerra, Roberto; Prades, Joan Daniel; Ossicini, Stefano; Cirera, Albert
2013-08-01
We study the details of electronic transport related to the atomistic structure of silicon quantum dots embedded in a silicon dioxide matrix using ab initio calculations of the density of states. Several structural and composition features of quantum dots (QDs), such as diameter and amorphization level, are studied and correlated with transport under the transfer Hamiltonian formalism. The current is strongly dependent on the QD density of states and on the conduction gap, both of which depend on the dot diameter. In particular, as size increases, the available states inside the QD increase, while the QD band gap decreases due to relaxation of quantum confinement. Both effects contribute to increasing the current with the dot size. Besides, the valence band offset between the band edges of the QD and the silica, and to a lesser degree the conduction band offset, increase with the QD diameter up to the theoretical value corresponding to planar heterostructures, thus decreasing the tunneling transmission probability and hence the total current. We discuss the influence of these parameters on electron and hole transport, evidencing a correlation between the electron (hole) barrier value and the electron (hole) current, and obtaining a general enhancement of the electron (hole) transport for larger (smaller) QDs. Finally, we show that crystalline and amorphous structures exhibit enhanced probability of hole and electron current, respectively.
An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring
NASA Astrophysics Data System (ADS)
Li, J. Y.; Kitanidis, P. K.
2013-12-01
Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
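For reference, the classical stochastic EnKF analysis step against which the iterative scheme is compared looks roughly like this. It is a generic sketch, not the authors' implementation; the observation operator, noise levels, and ensemble sizes are illustrative:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis: X is n_state x n_ens, y the observation vector."""
    n_ens = X.shape[1]
    Y = H @ X                                           # predicted observations
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, n_ens).T                   # perturbed observations
    Xa = X - X.mean(axis=1, keepdims=True)              # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)              # observation anomalies
    P_xy = Xa @ Ya.T / (n_ens - 1)
    P_yy = Ya @ Ya.T / (n_ens - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)                      # Kalman gain
    return X + K @ (y_pert - Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))                           # 10 states, 50 realizations
H = np.zeros((2, 10)); H[0, 0] = H[1, 5] = 1.0          # observe two state components
R = 0.1 * np.eye(2)
X_post = enkf_update(X, y=np.array([1.0, -0.5]), H=H, R=R, rng=rng)
```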
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters (referred to as x⃗), knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set given x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations existing in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal, resonance and continuum ranges, for all nuclear reaction models at these energies. Algorithms are presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross-section data assimilation, are presented.
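The BMC idea in miniature: sample pdf(posterior) ∝ pdf(prior) × likelihood with a Markov chain instead of minimizing a chi-square cost. The one-parameter "model", data, prior, and proposal width below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
data_y = np.array([1.1, 1.9, 3.2])                 # hypothetical measured observables
model = lambda x: x * np.array([1.0, 2.0, 3.0])    # stand-in one-parameter model
sigma = 0.2                                        # measurement uncertainty

def log_post(x):
    log_prior = -0.5 * ((x - 1.0) / 0.5) ** 2      # Gaussian prior on the parameter
    log_like = -0.5 * np.sum(((data_y - model(x)) / sigma) ** 2)
    return log_prior + log_like

x, chain = 1.0, []
for _ in range(20_000):                            # Metropolis random-walk chain
    x_new = x + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_post(x_new) - log_post(x):
        x = x_new
    chain.append(x)
print("posterior mean:", np.mean(chain[5000:]))    # discard burn-in
```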
Lövy, Matěj; Šklíba, Jan; Hrouzková, Ema; Dvořáková, Veronika; Nevo, Eviatar; Šumbera, Radim
2015-01-01
The costly search for food in subterranean rodents has resulted in various adaptations improving their foraging success under given ecological conditions. In the Spalax ehrenbergi superspecies, adaptations to local ecological conditions can promote speciation, which was recently proposed to occur even in sympatry at sites where two soil types of contrasting characteristics abut each other. A quantitative description of ecological conditions at such a site has nevertheless been missing. We measured characteristics of food supply and soil within 16 home ranges of blind mole rats Spalax galili in an area subdivided into two parts formed by basaltic soil and pale rendzina. We also mapped nine complete mole rat burrow systems to compare burrowing patterns between the soil types. Basaltic soil had a higher food supply and was harder than rendzina even under higher moisture content and lower bulk density. Population density of mole rats was five times lower in rendzina, possibly due to the lower food supply and higher cover of Sarcopoterium shrubs, which seem to be avoided by mole rats. A combination of food supply and soil parameters probably influences burrowing patterns, resulting in shorter and more complex burrow systems in basaltic soil. PMID:26192762
Density PDFs of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2012-09-01
The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5∘ and |b|≥ 5∘ are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
Cauchy flights in confining potentials
NASA Astrophysics Data System (ADS)
Garbaczewski, Piotr
2010-03-01
We analyze confining mechanisms for Lévy flights evolving under an influence of external potentials. Given a stationary probability density function (pdf), we address the reverse engineering problem: design a jump-type stochastic process whose target pdf (eventually asymptotic) equals the preselected one. To this end, dynamically distinct jump-type processes can be employed. We demonstrate that one “targeted stochasticity” scenario involves Langevin systems with a symmetric stable noise. Another derives from the Lévy-Schrödinger semigroup dynamics (closely linked with topologically induced super-diffusions), which has no standard Langevin representation. For computational and visualization purposes, the Cauchy driver is employed to exemplify our considerations.
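The Langevin scenario with a symmetric stable driver is easy to simulate; for the Cauchy case (α = 1), increments of the driving process over a step dt are Cauchy-distributed with scale dt. A sketch in a quartic confining potential V(x) = x⁴/4, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
x = 0.0
samples = np.empty(n)
for i in range(n):
    # Drift from V'(x) = x^3 plus a Cauchy increment of scale dt
    x += -x**3 * dt + dt * rng.standard_cauchy()
    samples[i] = x
# Histogramming 'samples' estimates the stationary pdf targeted by this driver.
```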
Integrated stationary Ornstein-Uhlenbeck process, and double integral processes
NASA Astrophysics Data System (ADS)
Abundo, Mario; Pirozzi, Enrica
2018-03-01
We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion B_t; moreover, we show that, under certain conditions on the functions f and g, the double integral process (DIP) D(t) = ∫_β^t g(s) (∫_α^s f(u) dB_u) ds can be thought of as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given; among them, we provide a simulation formula based on that representation by which sample paths, probability densities and first-passage times of the ISOU process are obtained. The first-passage times of the DIP are also studied.
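A direct Euler discretization already produces ISOU sample paths whose histograms approximate the probability densities discussed above. This is a generic sketch with illustrative parameters, not the paper's representation-based simulation formula:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 1e-3, 50_000

# Start the OU component X in its stationary law N(0, sigma^2 / (2 theta))
x = rng.normal(0.0, sigma / np.sqrt(2 * theta))
y, path = 0.0, np.empty(n)
for i in range(n):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()   # OU step
    y += x * dt                                                 # integrate the OU path
    path[i] = y
# Repeating over many seeds and histogramming path values at a fixed time
# estimates the ISOU probability density at that time.
```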
Immunohistochemical detection of p53 protein in ameloblastoma types.
el-Sissy, N A
1999-05-01
Overexpression of p53 protein in unicystic ameloblastoma (uAB) is denser than in the conventional ameloblastoma (cAB) type, indicating increased wild-type p53 suppressing the growth potential of uAB and denoting an early event of neoplastic transformation, probably of a previous odontogenic cyst. Overexpression of p53 in borderline cAB and malignant ameloblastoma (mAB) types might reflect a mutated p53 protein playing an oncogenic role and promoting tumour growth. Overexpression of p53 protein could be a valid screening method for predicting underlying malignant genetic changes in AB types, through increased frequency of immunoreactive cells or increased staining density.
Constructiveness and destructiveness of temperature in asymmetric quantum pseudo dot qubit system
NASA Astrophysics Data System (ADS)
Chen, Ying-Jie; Song, Hai-Tao; Xiao, Jing-Lin
2018-06-01
By using the variational method of the Pekar type, we theoretically study the temperature effects on an asymmetric quantum pseudo dot qubit with a pseudoharmonic potential under an electromagnetic field. The numerical results are analyzed and discussed in detail, showing how the ground and first excited state energies, the electron oscillation period, and the electron probability density in the superposition of the ground and first excited states vary with the temperature, the chemical potential, the pseudoharmonic potential, the electric field strength, the cyclotron frequency, the electron-phonon coupling constant, and the transverse and longitudinal effective confinement lengths.
NASA Astrophysics Data System (ADS)
Ulanov, S. F.
1990-06-01
A method proposed for investigating the statistics of bulk optical breakdown relies on multifrequency lasers, which eliminates the influence of the laser radiation intensity statistics. The method is based on preliminary recording of the peak intensity statistics of multifrequency laser radiation pulses at the caustic, using the optical breakdown threshold of K8 glass. The probability density distribution function of the peak intensities of the radiation pulses of a multifrequency laser was obtained at the focus. This method may be used to study self-interaction under conditions of bulk optical breakdown of transparent dielectrics.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time-costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and precision increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
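The simulation logic is straightforward to reproduce outside a GIS: scatter items in a plot, lay down circular core samplers, and compare the estimated with the true density. The plot size, core area, and item density below are illustrative, not the paper's scenarios:

```python
import numpy as np

rng = np.random.default_rng(0)
plot_side = 10.0                        # m; a 100 m^2 simulated plot
true_density = 1000.0                   # items / m^2 (randomly distributed case)
items = rng.uniform(0.0, plot_side, size=(int(true_density * plot_side**2), 2))

core_area_cm2 = 45.0
core_radius = np.sqrt(core_area_cm2 / 1e4 / np.pi)    # core radius in m
n_cores = 30
centers = rng.uniform(core_radius, plot_side - core_radius, size=(n_cores, 2))

counts = np.array([
    np.sum(np.linalg.norm(items - c, axis=1) <= core_radius) for c in centers
])
est_density = counts.mean() / (core_area_cm2 / 1e4)   # items / m^2
detection = np.mean(counts >= 1)                       # P(>= 1 item per core)
print(f"estimated density = {est_density:.0f} / m^2, detection = {detection:.2f}")
```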
Peculiarities of biological action of hadrons of space radiation.
Akoev, I G; Yurov, S S
1975-01-01
Biological investigations in space make it possible to assess the significant contribution of high-energy hadrons to the biological effects of space flight factors. The physical and molecular principles of the action of high-energy hadrons are analysed. Genetic and somatic effects produced by the secondary radiation from 70 GeV protons have been studied experimentally. The high biological effectiveness of hadrons, the great variability in their biological effects, and the specificity of their action are associated with the strong interactions of high-energy hadrons: the probability of nuclear interaction with any atomic nucleus, the generation of a great number of secondary particles (among them, probably, highly effective multicharged and heavy nuclei, antiprotons, and pi(-)-mesons), and the spatial distribution of the secondary particles as a narrow cone with extremely high particle density in its first part. The secondary radiation generated by high- and superhigh-energy hadrons upon their interaction with the spaceship is likely to be the greatest radiation hazard to the crew during space flights.
NASA Technical Reports Server (NTRS)
Filer, Elizabeth D.; Barnes, Norman P.; Morrison, Clyde A.
1991-01-01
The calculated energy levels, the branching ratios, and the estimated thresholds for thulium operating on the 3F4 to 3H6 transitions are reported. Garnet materials with the general formula A3B2C3O12 are evaluated. Calculations are performed for the A side under the assumption of D2 symmetry. X-ray data available in the literature are used to evaluate the crystal-field components A_nm. Even-n components are employed to calculate the crystal-field splittings within the manifold. Thermal occupation factors are determined in a straightforward manner using a Boltzmann distribution for the respective manifolds. Odd-n components are applied to calculate the transition probabilities for electric dipole transitions. It is determined that the magnetic dipole contributions to the transition probability are comparable to the electric dipole contributions in some cases. Thresholds as a function of the density of thulium atoms are calculated.
Bouguer Images of the North American Craton
NASA Technical Reports Server (NTRS)
Arvidson, R. E.; Bindschadler, D.; Bowring, S.; Eddy, M.; Guinness, E.; Leff, C.
1985-01-01
Processing of existing gravity and aeromagnetic data with modern methods is providing new insights into crustal and mantle structures for large parts of the United States and Canada. More than three-quarters of a million ground station readings of gravity are now available for this region. These data offer a wealth of information on crustal and mantle structures when reduced and displayed as Bouguer anomalies, where lateral variations are controlled by the size, shape and densities of underlying materials. Digital image processing techniques were used to generate Bouguer images that display more of the granularity inherent in the data as compared with existing contour maps. A dominant NW-SE linear trend of highs and lows can be seen extending from South Dakota, through Nebraska, and into Missouri. This trend is probably related to features created during an early and perhaps initial episode of crustal assembly by collisional processes. The younger granitic materials are probably a thin cover over an older crust.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected-component-based methods ([1], [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed, which should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarified and typically confined to research endeavors, it has real-world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion and sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
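The claim that flux distributions become less skewed downstream is essentially a convolution argument, which a few lines of Python can illustrate (the lognormal source distribution is an assumption for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)

# Individual sediment sources deliver strongly skewed inputs (a lognormal is
# an illustrative choice). Flux at a downstream point sums many asynchronous
# sources, so its distribution becomes progressively less skewed.
for n_sources in (1, 10, 100):
    flux = rng.lognormal(mean=0.0, sigma=1.5, size=(50_000, n_sources)).sum(axis=1)
    print(f"{n_sources:4d} sources: skewness = {stats.skew(flux):6.2f}")
```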
Shenoi, V N; Ali, S Z; Prasad, N G
2016-02-01
In holometabolous animals such as Drosophila melanogaster, larval crowding can affect a wide range of larval and adult traits. Adults emerging from high larval density cultures have smaller body size and increased mean life span compared to flies emerging from low larval density cultures. Therefore, adaptation to larval crowding could potentially affect adult longevity as a correlated response. We addressed this issue by studying a set of large, outbred populations of D. melanogaster, experimentally evolved for adaptation to larval crowding for 83 generations. We assayed longevity of adult flies from both selected (MCUs) and control populations (MBs) after growing them at different larval densities. We found that MCUs have evolved increased mean longevity compared to MBs at all larval densities. The interaction between selection regime and larval density was not significant, indicating that the density dependence of mean longevity had not evolved in the MCU populations. The increase in longevity in MCUs can be partially attributed to their lower rates of ageing. It is also noteworthy that reaction norm of dry body weight, a trait probably under direct selection in our populations, has indeed evolved in MCU populations. To the best of our knowledge, this is the first report of the evolution of adult longevity as a correlated response of adaptation to larval crowding. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.
The large-scale gravitational bias from the quasi-linear regime.
NASA Astrophysics Data System (ADS)
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients $C_{p,q}$ defined by $\langle \delta^p(\vec{x}_1)\,\delta^q(\vec{x}_2)\rangle_c = C_{p,q}\,\langle \delta^2(\vec{x})\rangle^{p+q-2}\,\langle \delta(\vec{x}_1)\,\delta(\vec{x}_2)\rangle$, where $\delta(\vec{x})$ is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large separation approximation in an explicit numerical Monte Carlo integration of the $C_{2,1}$ parameter as a function of $|\vec{x}_1-\vec{x}_2|$. A straightforward application is the calculation of the large-scale "bias" properties of over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
Short-term response of Dicamptodon tenebrosus larvae to timber management in southwestern Oregon
Leuthold, Niels; Adams, Michael J.; Hayes, John P.
2012-01-01
In the Pacific Northwest, previous studies have found a negative effect of timber management on the abundance of stream amphibians, but results have been variable and region specific. These studies have generally used survey methods that did not account for differences in capture probability and focused on stands that were harvested under older management practices. We examined the influence of contemporary forest practices on larval Dicamptodon tenebrosus as part of the Hinkle Creek paired watershed study. We used a mark-recapture analysis to estimate D. tenebrosus density at 100 1-m sites spread throughout the basin and used extended linear models that accounted for correlation resulting from repeated surveys at sites across years. Density was associated with substrate, but we found no evidence of an effect of harvest. While holding other factors constant, the model-averaged estimates indicated: 1) each 10% increase in small cobble or larger substrate increased median density of D. tenebrosus 1.05 times, 2) each 100-ha increase in the upstream area drained decreased median density of D. tenebrosus 0.96 times, and 3) increasing the fish density in the 40 m around a site by 0.01 increased median salamander density 1.01 times. Although this study took place in a single basin, it suggests that timber management in similar third-order basins of the southwestern Oregon Cascade foothills is unlikely to have short-term effects on D. tenebrosus larvae.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and the evaluation of the probability of each type of event, matrices of input parameters for numerical simulation have been constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events in one thousand years are typical of most areas within a range of about 10 km around the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
Coulomb Impurity Potential RbCl Quantum Pseudodot Qubit
NASA Astrophysics Data System (ADS)
Ma, Xin-Jun; Qi, Bin; Xiao, Jing-Lin
2015-08-01
By employing a variational method of Pekar type, we study the eigenenergies and the corresponding eigenfunctions of the ground and first-excited states of an electron strongly coupled to LO phonons in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. This QPD system may be used as a two-level quantum qubit. The expressions for the electron's probability density as a function of time and the coordinates, and for the oscillation period as a function of the Coulomb impurity potential and the polaron radius, have been derived. The results indicate ① that the probability density of the electron oscillates in the QPD with a certain period, ② that due to the presence of the asymmetric potential in the z direction of the RbCl QPD, the electron probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a two-dimensional symmetric structure in the xy plane of the QPD, and ③ that the oscillation period is a decreasing function of the Coulomb impurity potential, whereas it is an increasing function of the polaron radius.
A simple approach to nonlinear estimation of physical systems
Christakos, G.
1988-01-01
Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then take account of a wide range of applications. Under the above assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared. © 1988.
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
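For readers unfamiliar with the baseline method, here is a minimal iid importance sampling sketch in Python, with both densities fully normalized, which sidesteps the self-normalization and MCMC complications the paper addresses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Estimate E_pi[h(X)] for pi = N(0, 1) and h(x) = x^2, sampling from the
# heavier-tailed pi1 = Student-t with 5 degrees of freedom.
n = 100_000
x = stats.t.rvs(df=5, size=n, random_state=rng)
w = stats.norm.pdf(x) / stats.t.pdf(x, df=5)   # importance weights pi/pi1
hw = x**2 * w

est = hw.mean()
se = hw.std(ddof=1) / np.sqrt(n)               # CLT standard error (iid case)
print(f"estimate: {est:.4f} +/- {1.96 * se:.4f}  (truth: 1)")
```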
Does probability of occurrence relate to population dynamics?
Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.
2014-01-01
Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, was generally positively correlated with Pocc. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.
Rosa, B F J V; Dias-Silva, M V D; Alves, R G
2013-02-01
This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest, during two seasons. Samples of bryophytes adhered to rocks along a 100-m stretch of the stream were removed with a metal blade, and 200-mL pots were filled with the samples. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and the density and richness of individuals. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats for colonization by, and refuge of, Chironomidae larvae, mainly under conditions of faster water flow and higher precipitation.
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
Summary of intrinsic and extrinsic factors affecting detection probability of marsh birds
Conway, C.J.; Gibbs, J.P.
2011-01-01
Many species of marsh birds (rails, bitterns, grebes, etc.) rely exclusively on emergent marsh vegetation for all phases of their life cycle, and many organizations have become concerned about the status and persistence of this group of birds. Yet, marsh birds are notoriously difficult to monitor due to their secretive habits. We synthesized the published and unpublished literature and summarized the factors that influence detection probability of secretive marsh birds in North America. Marsh birds are more likely to respond to conspecific than heterospecific calls, and the seasonal peak in vocalization probability varies among co-existing species. The effectiveness of morning versus evening surveys varies among species and locations. Vocalization probability appears to be positively correlated with density in breeding Virginia Rails (Rallus limicola), Soras (Porzana carolina), and Clapper Rails (Rallus longirostris). Movement of birds toward the broadcast source creates biases when using count data from call-broadcast surveys to estimate population density. Ambient temperature, wind speed, cloud cover, and moon phase affected detection probability in some, but not all, studies. Better estimates of detection probability are needed. We provide recommendations that would help improve future marsh bird survey efforts and a list of 14 priority information and research needs that represent gaps in our current knowledge where future resources are best directed. © Society of Wetland Scientists 2011.
Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-04-01
A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site contains either a car moving up, a car moving right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward motion of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit p_ja = 1, it is found that a new jamming transition occurs from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move up but shift right by synchronizing with the motion of right cars. We show that the jam-avoiding drive has an important effect on the dynamical jamming transition.
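A rough Python sketch of a CA of this type follows. It uses a random sequential update and illustrative parameters, so it is an approximation of the model's behaviour rather than a reimplementation of the paper's exact update rule:

```python
import numpy as np

rng = np.random.default_rng(2)
L, density, p_ja = 64, 0.3, 0.5   # lattice size, car density, shift probability

# 0 = empty, 1 = up-moving car, 2 = right-moving car (equal numbers of each)
grid = np.zeros((L, L), dtype=int)
n = int(density * L * L)
cells = rng.choice(L * L, size=n, replace=False)
grid.flat[cells[: n // 2]] = 1
grid.flat[cells[n // 2:]] = 2

def step(grid):
    """Random sequential update: right cars move east, up cars move north;
    a blocked up car sidesteps east with probability p_ja."""
    moved = 0
    for species, (dr, dc) in ((2, (0, 1)), (1, (-1, 0))):
        cars = np.argwhere(grid == species)
        rng.shuffle(cars)
        for r, c in cars:
            r2, c2 = (r + dr) % L, (c + dc) % L
            if grid[r2, c2] == 0:
                grid[r2, c2], grid[r, c] = species, 0
                moved += 1
            elif species == 1 and rng.random() < p_ja:
                c3 = (c + 1) % L                  # jam-avoiding right shift
                if grid[r, c3] == 0:
                    grid[r, c3], grid[r, c] = 1, 0
    return moved

v = [step(grid) / n for _ in range(400)]
print(f"mean fraction of cars advancing per step: {np.mean(v[200:]):.3f}")
```

Sweeping `density` and `p_ja` shows the qualitative jamming behaviour: mobility stays near 1 at low density and collapses above a density threshold that shifts upward as p_ja increases.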
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
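The compound (gamma-modulated speckle) construction of the K distribution is easy to simulate. The sketch below uses an illustrative shape parameter, not one fitted to OCT data, and shows the heavy-tailed departure from Rayleigh statistics at low effective scatterer density:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, alpha = 200_000, 1.5   # low alpha mimics a low effective scatterer density

# Compound representation of K-distributed intensity: exponential speckle
# whose mean is modulated by a gamma-distributed "texture" term.
texture = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
intensity = rng.exponential(texture)
amplitude = np.sqrt(intensity)

# For Gaussian (fully developed speckle) statistics the amplitude is Rayleigh,
# with excess kurtosis about 0.25; low alpha gives a much heavier tail.
print(f"amplitude excess kurtosis: {stats.kurtosis(amplitude):.2f} "
      f"(Rayleigh: ~0.25)")
```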
Single-molecule stochastic times in a reversible bimolecular reaction
NASA Astrophysics Data System (ADS)
Keller, Peter; Valleriani, Angelo
2012-08-01
In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as, for instance, molecular motors. We derive the exact probability density of the stochastic waiting time that a molecule of species A needs until it reacts with a molecule of species B. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate treatment.
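The effect is straightforward to reproduce with a Gillespie-type simulation of a tagged molecule. The sketch below uses hypothetical rate constants and copy numbers, and compares the simulated mean waiting time against the mass-action exponential prediction:

```python
import numpy as np

rng = np.random.default_rng(3)
k1, k2 = 1.0, 0.5      # forward/backward rate constants (hypothetical)
nA0, nB0 = 5, 5        # small copy numbers, where fluctuations matter

def tagged_waiting_time():
    """Gillespie simulation of A + B <-> C, run until one tagged A reacts."""
    nA, nB, nC, t = nA0, nB0, 0, 0.0
    while True:
        a_f, a_b = k1 * nA * nB, k2 * nC       # propensities
        t += rng.exponential(1.0 / (a_f + a_b))
        if rng.random() < a_f / (a_f + a_b):   # forward event: A + B -> C
            if rng.random() < 1.0 / nA:        # the tagged A was consumed
                return t
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        else:                                  # backward event: C -> A + B
            nA, nB, nC = nA + 1, nB + 1, nC - 1

samples = np.array([tagged_waiting_time() for _ in range(20000)])
print(f"simulated mean waiting time: {samples.mean():.3f}; "
      f"mass-action exponential mean 1/(k1*nB0) = {1 / (k1 * nB0):.3f}")
```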
Probability density function approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1994-01-01
The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density function of the species mass fractions and enthalpy is obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.
Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B
2016-10-01
As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease (the human form of bovine spongiform encephalopathy, a prion disease of cattle) have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of probability distributions of incubation period and age at infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for incubation period alone, age at infection alone, and jointly for incubation and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime'), or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six combinations of parameter × interval for Patients 1 and 2, with values ranging from 0.572 to 0.998, respectively, for Patient 2 (age at infection × lifetime) and Patient 1 (joint incubation and age at infection × 1980-96). For Patient 3, relative probabilities for Saudi Arabia were not as distinct from those for other countries using the lifetime interval: 0.394, 0.360 and 0.378, respectively, for incubation period, age at infection and jointly for incubation and age at infection. However, for this patient Saudi Arabia clearly ranked highest within the 1980-96 period: 0.859, 0.871 and 0.865, respectively, for incubation period, age at infection and jointly for incubation and age at infection. These findings support the hypothesis that human infection with bovine spongiform encephalopathy occurred in Saudi Arabia. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Public Health.
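The partitioning step can be sketched in a few lines of Python. The distribution and residence history below are purely illustrative placeholders, not the paper's cohort-derived estimates:

```python
from scipy import stats

# Hypothetical age-at-infection density; the paper used distributions estimated
# from the UK case cohort, so this lognormal is purely a stand-in.
age_density = stats.lognorm(s=0.4, scale=30)   # age in years

# Illustrative residence history: (country, age at start, age at end).
history = [("Saudi Arabia", 0, 18), ("UK", 18, 20), ("USA", 20, 35)]

fractions = {}
for country, a0, a1 in history:
    mass = age_density.cdf(a1) - age_density.cdf(a0)   # density fraction
    fractions[country] = fractions.get(country, 0.0) + mass

total = sum(fractions.values())   # normalize over the lifetime window covered
for country, mass in fractions.items():
    print(f"{country}: relative probability of infection {mass / total:.3f}")
```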
Large Scale Data Analysis and Knowledge Extraction in Communication Data
2017-03-31
For this purpose, we developed a novel method, the "Correlation Density Rank", which finds the probability density distribution of related frequent events on all... The method, called the "Correlation Density Rank", is developed to derive the community tree from the network. As in the real world, where a network is... "Community Structure in Dynamic Social Networks using the Correlation Density Rank," 2014 ASE BigData/SocialCom/Cybersecurity Conference, Stanford.
Hydrogen and Sulfur from Hydrogen Sulfide. 5. Anodic Oxidation of Sulfur on Activated Glassy Carbon
1988-12-05
electrolyses of H2S can probably be carried out at high rates with modest cell voltages in the range 1-1.5 V. The variation in anode current densities... of H2S from solutions of NaSH in aqueous NaOH was achieved using suitably activated glassy carbon anodes. Thus electrolyses of H2S can probably be... passivation by using a basic solvent at 850C. Using an H2S-saturated 6M NaOH solution, they conducted electrolyses for extended periods at current densities
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
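As a rough illustration of the ICA-versus-PCA comparison (using sklearn's FastICA on raw image patches rather than the paper's steerable filter outputs, and a synthetic stand-in texture):

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(4)
# Synthetic stand-in for a texture image: doubly integrated noise has
# long-range correlations, enough to give the demo structure to find.
img = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), axis=0), axis=1)

# 8x8 patches play the role of the multi-scale filter outputs in the paper.
patches = np.array([img[r:r + 8, c:c + 8].ravel()
                    for r, c in rng.integers(0, 248, size=(5000, 2))])
patches -= patches.mean(axis=0)

s_ica = FastICA(n_components=16, random_state=0, max_iter=500).fit_transform(patches)
s_pca = PCA(n_components=16).fit_transform(patches)

def excess_kurtosis(s):
    z = (s - s.mean(axis=0)) / s.std(axis=0)
    return np.mean(z**4, axis=0) - 3

# ICA marginals are typically more non-Gaussian (sparser) than PCA marginals,
# which is what makes the product-of-marginals approximation tighter.
print("mean |kurtosis|  ICA:", np.abs(excess_kurtosis(s_ica)).mean().round(2),
      " PCA:", np.abs(excess_kurtosis(s_pca)).mean().round(2))
```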
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
Vanin, Anatoly F.; Burbaev, Dosymzhan Sh.
2011-01-01
The ability of mononuclear dinitrosyl iron complexes (M-DNICs) with thiolate ligands to act as NO donors and to trigger S-nitrosation of thiols can be explained only in the paradigm of the model of the [Fe+(NO+)2] core ({Fe(NO)2}7 according to the Enemark-Feltham classification). Similarly, the {(RS−)2Fe+(NO+)2}+ structure describing the distribution of unpaired electron density in M-DNIC corresponds to the low-spin (S = 1/2) state with a d7 electron configuration of the iron atom and predominant localization of the unpaired electron on MO(dz2) and the square planar structure of M-DNIC. On the other hand, the formation of molecular orbitals of M-DNIC including orbitals of the iron atom, thiolate and nitrosyl ligands results in a transfer of electron density from sulfur atoms to the iron atom and nitrosyl ligands. Under these conditions, the positive charge on the nitrosyl ligands diminishes appreciably, the interaction of the ligands with hydroxyl ions or with thiols slows down, and the hydrolysis of nitrosyl ligands and the S-nitrosating effect of the latter are not manifested. Most probably, the S-nitrosating effect of nitrosyl ligands is a result of weak binding of thiolate ligands to the iron atom under conditions favoring destabilization of M-DNIC. PMID:22505886
Spatial Metrics of Tumour Vascular Organisation Predict Radiation Efficacy in a Computational Model
Scott, Jacob G.
2016-01-01
Intratumoural heterogeneity is known to contribute to poor therapeutic response. Variations in oxygen tension in particular have been correlated with changes in radiation response in vitro and at the clinical scale with overall survival. Heterogeneity at the microscopic scale in tumour blood vessel architecture has been described, and is one source of the underlying variations in oxygen tension. We seek to determine whether histologic scale measures of the erratic distribution of blood vessels within a tumour can be used to predict differing radiation response. Using a two-dimensional hybrid cellular automaton model of tumour growth, we evaluate the effect of vessel distribution on cell survival outcomes of simulated radiation therapy. Using the standard equations for the oxygen enhancement ratio for cell survival probability under differing oxygen tensions, we calculate average radiation effect over a range of different vessel densities and organisations. We go on to quantify the vessel distribution heterogeneity and measure spatial organization using Ripley’s L function, a measure designed to detect deviations from complete spatial randomness. We find that under differing regimes of vessel density the correlation coefficient between the measure of spatial organization and radiation effect changes sign. This provides not only a useful way to understand the differences seen in radiation effect for tissues based on vessel architecture, but also an alternate explanation for the vessel normalization hypothesis. PMID:26800503
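A naive estimate of Ripley's K and L functions, without the edge corrections a real analysis would need, can be written directly from the definition:

```python
import numpy as np

rng = np.random.default_rng(5)

def ripley_L(points, radii, area=1.0):
    """Naive Ripley K/L estimate on a unit window (no edge correction)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    lam = n / area
    K = np.array([(d[d > 0] < r).sum() / (n * lam) for r in radii])
    return np.sqrt(K / np.pi)   # L(r); equals r under complete spatial randomness

radii = np.linspace(0.01, 0.25, 25)
random_pts = rng.uniform(0, 1, size=(200, 2))
clustered = (rng.uniform(0, 1, size=(20, 1, 2))
             + rng.normal(0, 0.02, size=(20, 10, 2))).reshape(-1, 2) % 1.0

for name, pts in (("random", random_pts), ("clustered", clustered)):
    L = ripley_L(pts, radii)
    print(f"{name}: max |L(r) - r| = {np.max(np.abs(L - radii)):.3f}")
```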
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
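BADE itself is more elaborate, but the adaptive nearest-neighbour idea it builds on can be sketched with a balloon-type estimator (the value of k and the sample are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def knn_density(x_eval, sample, k=30):
    """Balloon-type adaptive estimate: local bandwidth = distance to the
    k-th nearest sample point, so smoothing widens where data are sparse."""
    d = np.abs(x_eval[:, None] - sample[None, :])
    r_k = np.sort(d, axis=1)[:, k - 1]
    return k / (len(sample) * 2 * r_k)   # k / (n * length of [x - r_k, x + r_k])

# A very uneven sample, the hard case adaptive estimators target:
sample = np.concatenate([rng.normal(0, 0.05, 900), rng.normal(3, 1.0, 100)])
xs = np.linspace(-1, 6, 200)
f = knn_density(xs, sample)
print(f"estimated density near the sharp peak: {f[xs.searchsorted(0)]:.2f}, "
      f"in the broad tail: {f[xs.searchsorted(3)]:.2f}")
```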
Adaptive detection of a noise signal according to the Neyman-Pearson criterion
NASA Astrophysics Data System (ADS)
Padiryakov, Y. A.
1985-03-01
Optimum detection according to the Neyman-Pearson criterion is considered for a random Gaussian noise signal, stationary during the measurement, against a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities: it is known a priori that sample A, from the signal channel, is either the sum of signal and interference or interference alone, and that sample B, from the reference interference channel, is interference with the same spectral density as the interference in sample A under both hypotheses. The probability of correct detection is maximized on average, first in the 2N-dimensional space of signal and interference spectral density readings, by fixing the probability of false alarm at each point so as to hold it at a constant level against variation of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratios of sample A readings to sample B readings.
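The core Neyman-Pearson step, fixing the false-alarm probability and deriving the detection threshold from it, is shown below for a much simpler known-variance energy detector, not the paper's two-sample spectral setting:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, pfa = 64, 1e-2

# Under H0 (unit-variance Gaussian noise) the energy statistic sum(x^2) is
# chi-square with N degrees of freedom; fixing the false-alarm probability
# at pfa gives the Neyman-Pearson threshold as its upper quantile.
threshold = stats.chi2.isf(pfa, df=N)

def detect(x):
    return (x**2).sum() > threshold

h0 = [detect(rng.normal(0.0, 1.0, N)) for _ in range(20000)]
h1 = [detect(rng.normal(0.0, np.sqrt(2.0), N))   # signal doubles the variance
      for _ in range(20000)]
print(f"empirical Pfa: {np.mean(h0):.4f} (target {pfa}), "
      f"detection probability: {np.mean(h1):.3f}")
```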
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
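A minimal single-channel version of the ZMNL idea follows, with a hypothetical target spectrum and an exponential target marginal; the translation step matches the pdf exactly but distorts the spectrum somewhat, which the paper's methods correct:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 2**14

# Step 1: Gaussian realization with a prescribed (hypothetical low-pass) PSD.
freqs = np.fft.rfftfreq(n)
target_psd = 1.0 / (1.0 + (freqs / 0.05)**2)
spec = np.sqrt(target_psd) * (rng.normal(size=freqs.size)
                              + 1j * rng.normal(size=freqs.size))
spec[0], spec[-1] = abs(spec[0]), abs(spec[-1])   # DC and Nyquist bins real
g = np.fft.irfft(spec, n)
g /= g.std()

# Step 2: zero-memory nonlinearity -- map the Gaussian marginal onto a target
# marginal (exponential here) through the probability integral transform.
x = stats.expon.ppf(stats.norm.cdf(g))

print(f"skewness: {stats.skew(x):.2f} (exponential target: 2), "
      f"kurtosis: {stats.kurtosis(x):.2f} (target: 6)")
```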
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
Inference of reaction rate parameters based on summary statistics from experiments
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. The available published data are in the form of summary statistics, nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
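A toy ABC rejection sketch of the consistency step, with invented summary statistics rather than the published H + O2 data, and the activation energy folded in as Ea/R in kelvin, looks like this:

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented summary statistics: nominal rate coefficients and 1-sigma error
# bars at three temperatures (not the published H + O2 data).
T = np.array([1000.0, 1500.0, 2000.0])          # K
k_nom = 1e9 * np.exp(-8000.0 / T)               # "true" Arrhenius curve
k_err = 0.2 * k_nom

# ABC rejection: sample (log A, Ea/R) from broad uniform priors and keep the
# draws whose Arrhenius curve stays within 2 sigma of every data point.
log_A = rng.uniform(np.log(1e8), np.log(1e10), 200_000)
Ea_over_R = rng.uniform(4000.0, 12000.0, 200_000)   # kelvin
k = np.exp(log_A[:, None] - Ea_over_R[:, None] / T[None, :])
keep = np.all(np.abs(k - k_nom) < 2 * k_err, axis=1)

print(f"accepted {keep.sum()} of 200000 draws; "
      f"posterior mean Ea/R = {Ea_over_R[keep].mean():.0f} K")
```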
Selective oviposition of the mayfly Baetis bicaudatus.
Encalada, Andrea C; Peckarsky, Barbara L
2006-06-01
Selective oviposition can have important consequences for recruitment limitation and population dynamics of organisms with complex life cycles. Temporal and spatial variation in oviposition may be driven by environmental or behavioral constraints. The goals of this study were to: (1) develop an empirical model of the substrate characteristics that best explain observed patterns of oviposition by Baetis bicaudatus (Ephemeroptera), whose females lay eggs under rocks protruding from high-elevation streams in western Colorado; and (2) experimentally test selective oviposition by mayfly females. We surveyed the number and physical characteristics of potential oviposition sites, and counted the number and density of egg masses in different streams of one watershed throughout two consecutive flight seasons. Results of surveys showed that variability in the proportion of protruding rocks with egg masses and the density of egg masses per rock were explained primarily by seasonal and annual variation in hydrology, and variation in geomorphology among streams. Moreover, surveys and experiments showed that females preferred to oviposit under relatively large rocks located in places with high splash associated with fast current, which may provide visual or mechanical cues (or both) to females. Experiments also showed that high densities of egg masses under certain rocks were caused by rock characteristics rather than behavioral aggregation of ovipositing females. While aggregations of egg masses provided no survival advantage, rocks selected by females had lower probabilities of desiccating during egg incubation. Our data suggest that even when protruding rocks are abundant, not all rocks are used as oviposition sites by females, due to female selectivity and to differences in rock availability within seasons, years, or streams depending on variation in climate and hydrogeomorphology. Therefore, specialized oviposition behavior combined with variation in availability of quality oviposition substrata has the potential to limit recruitment of this species.
Density of American black bears in New Mexico
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart
2018-01-01
Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aid in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km² (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km² (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 were primarily a result of half of all hair samples for which genotypes were attempted failing to produce a complete genotype. We speculate that the low genotyping success was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.
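For readers unfamiliar with SECR, the detection model referenced here commonly takes the half-normal form, in which detection probability decays with distance from the animal's activity center. A hedged sketch with illustrative parameter values (the study reports g0 between 0.00001 and 0.02; sigma here is a placeholder):

```python
# Half-normal SECR detection function: probability of detecting an animal
# at a detector a distance d from its activity center, governed by g0
# (probability at the center) and spatial scale sigma.
import numpy as np

def p_detect(d, g0=0.01, sigma=2000.0):
    """Half-normal detection probability at distance d (meters)."""
    return g0 * np.exp(-d**2 / (2.0 * sigma**2))

distances = np.array([0.0, 1000.0, 5000.0])
print(p_detect(distances))  # equals g0 at the center, decays with distance
```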
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. A knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single fixed omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single fixed omnidirectional sensors and the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
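The calibration idea reduces to dividing raw counts by the site-specific probability of detection, the monitored area, and the recording time. A minimal sketch; the function name and all values are illustrative, not the dissertation's pipeline:

```python
# "Calibrated" call density: raw call count corrected by the site-specific
# probability of detection over a monitored area and recording period.
def call_density(n_calls, p_detection, area_km2, hours):
    """Calls per km^2 per hour, corrected for detection probability."""
    return n_calls / (p_detection * area_km2 * hours)

# 480 detected calls, mean detection probability 0.4, 1000 km^2, 24 h
print(call_density(480, 0.4, 1000.0, 24.0))  # 0.05 calls km^-2 h^-1
```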
Timescales of isotropic and anisotropic cluster collapse
NASA Astrophysics Data System (ADS)
Bartelmann, M.; Ehlers, J.; Schneider, P.
1993-12-01
From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.
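A toy numerical illustration of the timescale argument: integrating the spherical collapse equation R'' = -GM/R² with and without an inward initial velocity shows the faster collapse described above. Crude explicit Euler in arbitrary units, purely illustrative:

```python
# Spherical collapse: R'' = -GM/R^2, integrated until R is nearly zero.
# v0 = 0 mimics a shell starting at rest in the perturbation frame;
# v0 < 0 adds the inward initial velocity perturbation discussed above.
def collapse_time(v0, GM=1.0, R0=1.0, dt=1e-5):
    R, V, t = R0, v0, 0.0
    while R > 1e-3 * R0:
        V -= GM / R**2 * dt   # explicit Euler; adequate for an illustration
        R += V * dt
        t += dt
    return t

print(collapse_time(0.0))    # classic free-fall time, ~1.11 in these units
print(collapse_time(-0.5))   # inward velocity perturbation collapses sooner
```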
Density estimates of monarch butterflies overwintering in central Mexico
Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena
2017-01-01
Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed stems needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.
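The mixture construction can be reproduced in a few lines once the six component estimates are specified. A hedged sketch with placeholder means and spreads (not the six published values), using moment-matched log-normal components:

```python
# Monte Carlo draw from an equal-weight mixture of six log-normal
# components, each matched to a (placeholder) published mean and SD.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([6.9, 15.0, 21.0, 28.0, 45.0, 60.9])   # million ha^-1
sds = np.array([2.0, 4.0, 5.0, 6.0, 10.0, 12.0])

# Log-normal parameters matched to each component's mean and variance.
mu = np.log(means**2 / np.sqrt(sds**2 + means**2))
s = np.sqrt(np.log(1.0 + sds**2 / means**2))

comp = rng.integers(0, 6, size=100_000)                 # equal mixture weights
draws = rng.lognormal(mu[comp], s[comp])

print(draws.mean(), np.median(draws))  # right-skewed mixture: median < mean
```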
Atmospheric densities derived from CHAMP/STAR accelerometer observations
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Tamagnan, D.; Biancale, R.
2004-03-01
The satellite CHAMP carries the accelerometer STAR in its payload and thanks to the GPS and SLR tracking systems accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%. The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models represent the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, while the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
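The evaluation step described here boils down to computing observed-to-computed density ratios and binning them, e.g. by local solar time. An illustrative sketch on synthetic ratios that mimic a ~15% model underestimate (not the CHAMP/STAR data):

```python
# Summarize density ratios ("observed"/"computed") globally and in
# 3-hour local-solar-time bins, as in the model evaluation above.
import numpy as np

rng = np.random.default_rng(5)
local_time = rng.uniform(0.0, 24.0, size=5000)   # hours
ratio = rng.normal(1.15, 0.17, size=5000)        # synthetic observed/computed

print(f"global mean {ratio.mean():.2f}, rms scatter {ratio.std():.2f}")
edges = np.arange(0, 25, 3)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (local_time >= lo) & (local_time < hi)
    print(f"LT {lo:02d}-{hi:02d} h: mean ratio {ratio[sel].mean():.3f}")
```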
Probabilistic cluster labeling of imagery data
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1980-01-01
The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class- and cluster-conditional densities in terms of probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
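As a concrete, hedged stand-in for the fixed-point schemes described above (not the paper's exact update), the following iterates the standard EM fixed point for P(class | cluster) when observed labels are imperfect with a known confusion matrix:

```python
# Fixed-point iteration for P(class | cluster) under label imperfection.
# Q[c, j]: probability that a pixel of true class c receives label j.
# m[k, j]: count of observed label j in cluster k. Both are illustrative.
import numpy as np

def label_probabilities(m, Q, iters=200):
    K, C = m.shape[0], Q.shape[0]
    p = np.full((K, C), 1.0 / C)              # uniform initial guess
    for _ in range(iters):
        post = p[:, :, None] * Q[None, :, :]  # (K, C, J) joint weights
        post /= post.sum(axis=1, keepdims=True)   # posterior of class given label
        p = (m[:, None, :] * post).sum(axis=2)    # expected class counts
        p /= p.sum(axis=1, keepdims=True)         # renormalize per cluster
    return p

Q = np.array([[0.9, 0.1], [0.2, 0.8]])        # illustrative confusion matrix
m = np.array([[40.0, 10.0], [5.0, 45.0]])     # label counts per cluster
print(label_probabilities(m, Q))
```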
Effect of electromagnetic field on Kordylewski clouds formation
NASA Astrophysics Data System (ADS)
Salnikova, Tatiana; Stepanov, Sergey
2018-05-01
In previous papers the authors proposed an explanation for the appearance and disappearance of the Kordylewski clouds, accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light-pressure perturbations of the Sun, the triangular libration points are not points of relative equilibrium. However, stable periodic motions of particles exist around each of the triangular libration points. This permits a probabilistic model of dust cloud formation: the clouds move along periodic orbits within a small neighborhood of these points. Extending this research, we propose a mathematical model that also accounts for electromagnetic effects arising when the dust particles in the vicinity of the triangular libration points of the Earth-Moon system are charged. The model includes the self-induced force field of the set of charged particles, and the probability distribution density evolves according to the Vlasov equation.
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems consisting of M statistically independent strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain on standby. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under the squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using a Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
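The Monte Carlo construction of an approximate highest probability density (HPD) credible interval is generic: among sorted posterior samples, find the shortest interval containing the desired mass. A sketch with a stand-in posterior, not the paper's generalized half-logistic model:

```python
# Approximate HPD credible interval from Monte Carlo posterior samples:
# the shortest window of sorted samples containing the requested mass.
import numpy as np

def hpd_interval(samples, mass=0.95):
    x = np.sort(samples)
    n = len(x)
    m = int(np.ceil(mass * n))
    widths = x[m - 1:] - x[:n - m + 1]   # widths of all windows of size m
    i = np.argmin(widths)                # left edge of the shortest window
    return x[i], x[i + m - 1]

rng = np.random.default_rng(1)
post = rng.beta(8.0, 2.0, size=50_000)   # stand-in posterior for reliability
print(hpd_interval(post))
```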
Newsvendor problem under complete uncertainty: a case of innovative products.
Gaspars-Wieloch, Helena
2017-01-01
The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of the Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
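A hedged sketch of how such a Hurwicz-Bayes hybrid can score order quantities without demand probabilities; the payoffs, scenario set, and weighting below are illustrative, not the paper's exact rule:

```python
# Hurwicz-Bayes hybrid score for newsvendor order quantities under complete
# uncertainty: alpha mixes best/worst payoffs (Hurwicz optimism), while the
# Bayes term averages payoffs with equal scenario weights; beta blends them.
import numpy as np

price, cost = 10.0, 4.0
demand_scenarios = np.array([20, 50, 80, 120])   # units; no probabilities known

def payoff(q, d):
    return price * np.minimum(q, d) - cost * q

def hybrid_score(q, alpha=0.4, beta=0.5):
    pay = payoff(q, demand_scenarios)
    hurwicz = alpha * pay.max() + (1 - alpha) * pay.min()
    bayes = pay.mean()                            # uniform over scenarios
    return beta * hurwicz + (1 - beta) * bayes

orders = np.arange(10, 130, 10)
best = max(orders, key=hybrid_score)
print(best, hybrid_score(best))
```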
Diapir-induced reorientation of Saturn's moon Enceladus.
Nimmo, Francis; Pappalardo, Robert T
2006-06-01
Enceladus is a small icy satellite of Saturn. Its south polar region consists of young, tectonically deformed terrain and has an anomalously high heat flux. This heat flux is probably due to localized tidal dissipation within either the ice shell or the underlying silicate core. The surface deformation is plausibly due to upwelling of low-density material (diapirism) as a result of this tidal heating. Here we show that the current polar location of the hotspot can be explained by reorientation of the satellite's rotation axis because of the presence of a low-density diapir. If the diapir is in the ice shell, then the shell must be relatively thick and maintain significant rigidity (elastic thickness greater than approximately 0.5 km); if the diapir is in the silicate core, then Enceladus cannot possess a global subsurface ocean, because the core must be coupled to the overlying ice for reorientation to occur. The reorientation generates large (approximately 10 MPa) tectonic stress patterns that are compatible with the observed deformation of the south polar region. We predict that the distribution of impact craters on the surface will not show the usual leading hemisphere-trailing hemisphere asymmetry. A low-density diapir also yields a potentially observable negative gravity anomaly.
Anastasiadis, Anastasios; Onal, Bulent; Modi, Pranjal; Turna, Burak; Duvdevani, Mordechai; Timoney, Anthony; Wolf, J Stuart; De La Rosette, Jean
2013-12-01
This study aimed to explore the relationship between stone density and outcomes of percutaneous nephrolithotomy (PCNL) using the Clinical Research Office of the Endourological Society (CROES) PCNL Global Study database. Patients undergoing PCNL treatment were assigned to a low stone density [LSD, ≤ 1000 Hounsfield units (HU)] or high stone density (HSD, > 1000 HU) group based on the radiological density of the primary renal stone. Preoperative characteristics and outcomes were compared in the two groups. Retreatment for residual stones was more frequent in the LSD group. The overall stone-free rate achieved was higher in the HSD group (79.3% vs 74.8%, p = 0.113). By univariate regression analysis, the probability of achieving a stone-free outcome peaked at approximately 1250 HU. Below or above this density resulted in lower treatment success, particularly at very low HU values. With increasing radiological stone density, operating time decreased to a minimum at approximately 1000 HU, then increased with further increase in stone density. Multivariate non-linear regression analysis showed a similar relationship between the probability of a stone-free outcome and stone density. Higher treatment success rates were found with low stone burden, pelvic stone location and use of pneumatic lithotripsy. Very low and high stone densities are associated with lower rates of treatment success and longer operating time in PCNL. Preoperative assessment of stone density may help in the selection of treatment modality for patients with renal stones.
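The reported peak near 1250 HU is the signature of a regression model that is quadratic in stone density on the logit scale. A hedged illustration on synthetic data (not the CROES dataset), recovering the peak location as -b1/(2*b2):

```python
# Quadratic-in-HU logistic regression: stone-free probability rises, peaks,
# then falls with radiological stone density. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
hu = rng.uniform(300, 2000, size=500)
x = hu / 1000.0                                  # work in kHU for stability
p_true = 1 / (1 + np.exp(-(-3.0 + 6.0 * x - 2.4 * x**2)))  # peak at 1250 HU
success = rng.random(500) < p_true

X = np.column_stack([x, x**2])
model = LogisticRegression(C=1e6, max_iter=5000).fit(X, success)
b1, b2 = model.coef_[0]
print("stone-free probability peaks near", -b1 / (2 * b2) * 1000, "HU")
```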
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ~ω3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Dual Approach To Superquantile Estimation And Applications To Density Fitting
2016-06-01
We incorporate additional constraints to improve the fidelity of density estimates in tail regions. We limit our investigation to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided in the form of samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
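Where bandwidth enters a Gaussian KDE can be seen in a few lines. The sketch below uses Silverman's rule of thumb purely as a stand-in; the paper's contribution is a different, kernel-trick-based bandwidth estimator:

```python
# Gaussian kernel density estimate on a grid; the bandwidth h controls the
# smoothing and is the quantity the paper's BE method estimates differently.
import numpy as np

def kde_gaussian(samples, grid):
    n = len(samples)
    h = 1.06 * samples.std(ddof=1) * n ** (-0.2)   # Silverman's rule of thumb
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, size=400)
grid = np.linspace(-4, 4, 81)
pdf = kde_gaussian(data, grid)
print(pdf.sum() * (grid[1] - grid[0]))   # integrates to ~1 over the grid
```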
Using independent component analysis for electrical impedance tomography
NASA Astrophysics Data System (ADS)
Yan, Peimin; Mo, Yulong
2004-05-01
Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all information contained in different locations of tissue, it is necessary to image the individual conductivity distribution. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is then decomposed into statistically independent components. The individual conductivity distribution can be reconstructed by the sensitivity theorem in this paper. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.
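A minimal sketch of the ICA step on multichannel data, using scikit-learn's FastICA on synthetic mixtures (the EIT signal subspace and the sensitivity-theorem reconstruction are not reproduced here):

```python
# Recover statistically independent components from linearly mixed channels.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2000)
sources = np.column_stack([np.sin(40 * t), np.sign(np.sin(7 * t))])
mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
observed = sources @ mixing.T + 0.01 * rng.normal(size=(2000, 2))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)   # estimated independent components
print(recovered.shape)
```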
Origin of the moon - Capture by gas drag of the earth's primordial atmosphere
NASA Astrophysics Data System (ADS)
Nakazawa, K.; Komuro, T.; Hayashi, C.
1983-06-01
The novel lunar formation scenario proposed is an extension of planetary formation process studies suggesting that the earth originated in a gaseous solar nebula. Attention is given to a series of dynamical processes in which a low energy planetesimal is trapped within the terrestrial Hill sphere under circumstances in which the primordial atmosphere's gas density gradually decreases. An unbound planetesimal entering the Hill sphere would have had to dissipate enough kinetic energy to enter a bound orbit before escaping from the Hill sphere, without falling onto the earth's surface. The kinetic energy dissipation condition is considered through the calculation of the solar gravity and atmospheric gas drag effects on the planetesimal's orbital motion. The result obtained shows that a low energy planetesimal of less than lunar mass can be trapped in the Hill sphere with high probability if it enters before the atmospheric density has decreased to about 1/50th of its initial value.
NASA Astrophysics Data System (ADS)
Rognlien, Thomas; Rensink, Marvin
2016-10-01
Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require the coupling of the plasma with recycled or injected neutral gas. Various neutral models are used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape-probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for simulations of devices with the high densities and sizes typical of fusion power plants, because the neutral collisional mean free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules, and that easily incorporates nonlinear neutral-neutral collision effects. In addition to the strong charge exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.
Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels
NASA Technical Reports Server (NTRS)
Raj, S. V.; Ghosn, L. J.
2007-01-01
A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation-hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure, and torsional loads. Theoretical plastic collapse failure maps, plotting foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, considering three plastic collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on foam relative density, the magnitude of t/L, and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes in a sandwiched foam fan blade.
NASA Astrophysics Data System (ADS)
Dioguardi, Fabio; Dellino, Pierfrancesco
2017-04-01
Dilute pyroclastic density currents (DPDC) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous for human lives and infrastructures both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014) that was already extensively used for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In the latest version the code structure, the computation times, and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit characteristics (e.g. componentry, grain size) to flow variables (e.g. flow average density and shear velocity), the latter either calculated by the code itself or given as input by the user. The deposition rate is calculated by summing the contributions of each grain-size class of all components constituting the deposit (e.g. juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration, and deposition probability; a sketch of this summation is given after this abstract. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e. starting from what is observed in the deposit (e.g. the weight-fraction ratios between the different grain-size classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probabilities of the different grain sizes constituting the DPDC deposit may differ and are not necessarily equal to unity. Calculations of the deposition rates of large-scale experiments, previously computed with different methods, have been performed as experimental validation and are presented. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Comput. Geosci., 66, 200-210, doi:10.1016/j.cageo.2014.01.013. Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
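The deposition-rate summation described above has a simple skeletal form: sum, over grain-size classes and components, concentration times terminal velocity times deposition probability. A hedged sketch with placeholder values, not PYFLOW_2.0 internals:

```python
# Total deposition rate as a sum over grain-size classes/components of
# concentration * terminal (settling) velocity * deposition probability.
import numpy as np

def deposition_rate(concentration, settling_velocity, deposition_probability):
    """Mass flux to the bed (e.g. kg m^-2 s^-1), summed over classes."""
    return np.sum(concentration * settling_velocity * deposition_probability)

c = np.array([0.8, 0.5, 0.2])     # kg/m^3 per grain-size class (placeholder)
ws = np.array([0.05, 0.3, 1.2])   # m/s terminal velocities (placeholder)
pd = np.array([0.4, 0.8, 1.0])    # deposition probabilities <= 1 (placeholder)
print(deposition_rate(c, ws, pd))
```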
Probabilistic mapping of flood-induced backscatter changes in SAR time series
NASA Astrophysics Data System (ADS)
Schlaffer, Stefan; Chini, Marco; Giustarini, Laura; Matgen, Patrick
2017-04-01
The information content of flood extent maps can be increased considerably by including information on the uncertainty of the flood area delineation. This additional information can be of benefit in flood forecasting and monitoring. Furthermore, flood probability maps can be converted to binary maps showing flooded and non-flooded areas by applying a threshold probability value pF = 0.5. In this study, a probabilistic change detection approach for flood mapping based on synthetic aperture radar (SAR) time series is proposed. For this purpose, conditional probability density functions (PDFs) for land and open water surfaces were estimated from ENVISAT ASAR Wide Swath (WS) time series containing >600 images using a reference mask of permanent water bodies. A pixel-wise harmonic model was used to account for seasonality in backscatter from land areas caused by soil moisture and vegetation dynamics. The approach was evaluated for a large-scale flood event along the River Severn, United Kingdom. The retrieved flood probability maps were compared to a reference flood mask derived from high-resolution aerial imagery by means of reliability diagrams. The obtained performance measures indicate both high reliability and confidence, although there was a slight underestimation of the flood extent, which may in part be attributed to topographically induced radar shadows along the edges of the floodplain. Furthermore, the results highlight the importance of local incidence angle for the separability between flooded and non-flooded areas, as specular reflection properties of open water surfaces increase with a more oblique viewing geometry.
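The probabilistic core is a per-pixel Bayes update: given conditional PDFs of backscatter for open water and for land, the posterior flood probability follows directly. A hedged sketch with illustrative Gaussian class-conditionals standing in for the PDFs estimated from the time series:

```python
# Posterior flood probability of a pixel from class-conditional backscatter
# PDFs via Bayes' rule; means, scales, and the prior are placeholders.
import numpy as np
from scipy.stats import norm

def flood_probability(sigma0_db, prior_flood=0.1):
    f_water = norm.pdf(sigma0_db, loc=-18.0, scale=2.0)  # open water PDF
    f_land = norm.pdf(sigma0_db, loc=-8.0, scale=3.0)    # land PDF (stand-in
                                                         # for harmonic-model mean)
    num = prior_flood * f_water
    return num / (num + (1 - prior_flood) * f_land)

pixels = np.array([-20.0, -12.0, -6.0])                  # backscatter in dB
pF = flood_probability(pixels)
print(pF, pF > 0.5)   # thresholding at pF = 0.5 yields the binary flood map
```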
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
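The central relation described in this abstract can be restated compactly; the notation below is an illustrative reconstruction from the prose, not quoted from the paper:

```latex
% Frequency- and shear-rate-dependent diffusion tensor as a generalized
% Green-Kubo integral over the nonequilibrium velocity autocorrelation of
% the tagged particle in its local fluid rest frame (a = shear rate).
\[
  D_{ij}(\omega, a) = \int_{0}^{\infty} \mathrm{d}t \; e^{-i\omega t}
  \left\langle \delta v_i(t)\, \delta v_j(0) \right\rangle_{a}
\]
% In the limit a -> 0 and omega -> 0 this reduces to the equilibrium
% Green-Kubo expression for the self-diffusion coefficient.
```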
Nonequilibrium dynamics of a pure dry friction model subjected to colored noise
NASA Astrophysics Data System (ADS)
Geffert, Paul M.; Just, Wolfram
2017-06-01
We investigate the impact of noise on a two-dimensional simple paradigmatic piecewise-smooth dynamical system. For that purpose, we consider the motion of a particle subjected to dry friction and colored noise. The finite correlation time of the noise provides an additional dimension in phase space, causes a nontrivial probability current, and establishes a proper nonequilibrium regime. Furthermore, the setup allows for the study of stick-slip phenomena, which show up as a singular component in the stationary probability density. Analytic insight can be provided by application of the unified colored noise approximation, developed by Jung and Hänggi [Phys. Rev. A 35, 4464(R) (1987), 10.1103/PhysRevA.35.4464]. The analysis of probability currents and of power spectral densities underpins the observed stick-slip transition, which is related with a critical value of the noise correlation time.