NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution and has since been widely applied to other probability models. The PPCC test is known as one of the best goodness-of-fit tests because it shows high rejection power. In this study, we focus on PPCC tests for the GEV distribution, which is widely used worldwide. Several plotting position formulas have been suggested for the GEV model. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo simulation ACKNOWLEDGEMENTS This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
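As a concrete illustration of the PPCC statistic for the GEV case, the sketch below (Python) correlates ordered sample values with GEV quantiles evaluated at plotting positions. The Cunnane formula used here is a generic stand-in for the shape-dependent formulas of Goel and De, In-na and Nguyen, and Kim et al.; the shape value and sample size are arbitrary assumptions.

import numpy as np
from scipy import stats

def gev_ppcc(sample, shape, a=0.4):
    """Correlation of ordered data with GEV quantiles at plotting positions."""
    x = np.sort(sample)
    n = len(x)
    p = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)    # Cunnane plotting position
    return np.corrcoef(x, stats.genextreme.ppf(p, shape))[0, 1]

rng = np.random.default_rng(1)
sample = stats.genextreme.rvs(-0.1, size=50, random_state=rng)
print(f"PPCC statistic: {gev_ppcc(sample, shape=-0.1):.4f}")   # near 1 under H0

Rejection thresholds for a given significance level would then come from regression equations (or Monte Carlo simulation) as functions of the shape parameter and sample size, as the abstract describes.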
Mathematical Model to estimate the wind power using four-parameter Burr distribution
NASA Astrophysics Data System (ADS)
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
The four-parameter Burr distribution is more suitable than other distributions for describing the true probability distribution of wind speed at a given site. This paper introduces its important properties and characteristics, discusses the application of the four-parameter Burr distribution to wind speed prediction, and derives an expression for the probability distribution of the output power of a wind turbine.
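A minimal sketch of the idea, using scipy's burr12 family (two shape parameters plus location and scale, four parameters in total) as a stand-in for the paper's four-parameter Burr model; the wind-speed sample is synthetic and the location is pinned at zero since speeds are nonnegative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = stats.weibull_min.rvs(2.0, scale=8.0, size=2000, random_state=rng)  # toy wind speeds, m/s

# Fit the Burr XII family; loc fixed at 0 because wind speeds are nonnegative.
c, d, loc, scale = stats.burr12.fit(wind, floc=0)
print(f"shape c = {c:.3f}, shape d = {d:.3f}, scale = {scale:.3f}")

# Probability that the speed lies in an assumed turbine operating band of 3-25 m/s.
p_op = stats.burr12.cdf(25, c, d, loc, scale) - stats.burr12.cdf(3, c, d, loc, scale)
print(f"P(3 <= v <= 25) = {p_op:.3f}")

The fitted distribution can then be propagated through a turbine power curve to obtain the output-power distribution the abstract refers to.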
Positive phase space distributions and uncertainty relations
NASA Technical Reports Server (NTRS)
Kruger, Jan
1993-01-01
In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes in prematurely born (preterm) infants and to investigate the intrinsic characteristics of early brain development in preterm infants. Twenty sets of EEGs recorded in neurologically normal infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. The Hilbert transform was applied to extract the envelope. We determined the most suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions of preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the mode statistic showed significant linear relationships with PCA and was therefore considered a useful index for PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating the stationary nature of developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
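A minimal sketch of the envelope pipeline described above, applied to a synthetic signal: Hilbert transform for the amplitude envelope, then lognormal and gamma fits compared by log-likelihood. The sampling rate and signal shape are assumptions for illustration.

import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
fs = 250.0                                         # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 8 * t) * rng.lognormal(0.0, 0.5, t.size)   # toy signal

envelope = np.abs(signal.hilbert(eeg))             # instantaneous amplitude

ln = stats.lognorm.fit(envelope, floc=0)
ga = stats.gamma.fit(envelope, floc=0)
ll_ln = stats.lognorm.logpdf(envelope, *ln).sum()
ll_ga = stats.gamma.logpdf(envelope, *ga).sum()
print("lognormal" if ll_ln > ll_ga else "gamma", "fits the envelope better")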
Diffusion of active chiral particles
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.
2016-12-01
The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v̂ at time t, and numerically, by the use of Langevin dynamics simulations. The analysis focuses on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion). This marginal density is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the full probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the position distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical-shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, in which particles follow a non-Gaussian distribution of positions even though the mean-squared displacement is a linear function of time.
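A minimal Langevin-dynamics sketch in the spirit of the simulations mentioned above, reduced to two dimensions for brevity (the paper treats the three-dimensional case); the speed, chiral rotation rate, and rotational diffusion coefficient are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(2)
n, steps, dt = 5000, 2000, 1e-2
v0, omega, Dr = 1.0, 5.0, 0.5           # speed, chiral rate, rotational diffusion

theta = rng.uniform(0, 2 * np.pi, n)    # initial headings
pos = np.zeros((n, 2))
for _ in range(steps):
    pos[:, 0] += v0 * np.cos(theta) * dt
    pos[:, 1] += v0 * np.sin(theta) * dt
    theta += omega * dt + np.sqrt(2 * Dr * dt) * rng.normal(size=n)

msd = np.mean(np.sum(pos ** 2, axis=1))                          # mean-squared displacement
kurt = np.mean(pos[:, 0] ** 4) / np.mean(pos[:, 0] ** 2) ** 2    # 3 for a Gaussian marginal
print(f"MSD = {msd:.3f}, marginal kurtosis = {kurt:.3f}")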
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, Marko; Lee, Eunghyun
2014-01-15
We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required in order to use the Bethe ansatz, and the resulting model is the q-boson model by Sasamoto and Wadati ["Exact results for one-dimensional totally asymmetric diffusion models," J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin ["Macdonald processes," Probab. Theory Relat. Fields (to be published)]. We find the explicit formula for the transition probability of the q-TAZRP via the Bethe ansatz. Using the transition probability, we find the probability distribution of the left-most particle's position at time t. To find this probability we derive a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom ["Integral formulas for the asymmetric simple exclusion process," Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.
Option volatility and the acceleration Lagrangian
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang
2014-01-01
This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case with a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is solved by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of this conditional probability distribution. Using the conditional probability and the path-integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is then obtained using the conditional probability and the path-integral method.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diwaker; Chakraborty, Aniruddha
The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the points of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. Approximating permanents of complex matrices in general is believed to be a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the resulting output probability distribution.
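For orientation, the sketch below evaluates the permanent of a small positive-semidefinite Hermitian matrix exactly with Ryser's O(2^n n) formula; this is illustrative only and is not the BPP^NP approximation algorithm referred to in the abstract.

import numpy as np
from itertools import combinations

def permanent(A):
    """Ryser's formula, exact in O(2^n n) time."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = G @ G.conj().T                        # positive-semidefinite Hermitian
print(permanent(A).real)                  # real and nonnegative for PSD matrices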
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
Shape of growth-rate distribution determines the type of Non-Gibrat’s Property
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki
2011-11-01
In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.
The bingo model of survivorship: 1. probabilistic aspects.
Murphy, E A; Trojak, J E; Hou, W; Rohde, C A
1981-01-01
A "bingo" model is one in which the pattern of survival of a system is determined by whichever of several components, each with its own particular distribution for survival, fails first. The model is motivated by the study of lifespan in animals. A number of properties of such systems are discussed in general. They include the use of a special criterion of skewness that probably corresponds more closely than traditional measures to what the eye observes in casually inspecting data. This criterion is the ratio, r(h), of the probability density at a point an arbitrary distance, h, above the mode to that an equal distance below the mode. If this ratio is positive for all positive arguments, the distribution is considered positively asymmetrical and conversely. Details of the bingo model are worked out for several types of base distributions: the rectangular, the triangular, the logistic, and by numerical methods, the normal, lognormal, and gamma.
Randomized Path Optimization for the Mitigated Counter Detection of UAVs
2017-06-01
A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location.
Stylized facts in internal rates of return on stock index and its derivative transactions
NASA Astrophysics Data System (ADS)
Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya
2007-08-01
Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single-asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. The resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S&P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market reveals an interesting difference in the behavior of the two probability distributions when a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise-trader strategies, as opposed to the smooth distributions observed for fundamentalist strategies in single-encounter transactions, may be useful in deducing the type of investment strategy from the trading revenues of small portfolio investors.
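A minimal sketch of the quantity under study: internal rates of return under continuous compounding for all buy-sell pairs of a synthetic daily index series (the price model is an arbitrary assumption).

import numpy as np

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(2e-4, 0.01, 1000)))   # toy daily index

buys, sells = np.triu_indices(len(prices), k=1)                  # all pairs buy < sell
irr = np.log(prices[sells] / prices[buys]) / (sells - buys)      # per-day compound rate
print(f"mean IRR = {irr.mean():.2e} per day, std = {irr.std():.2e}")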
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ~ω3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Representation of complex probabilities and complex Gibbs sampling
NASA Astrophysics Data System (ADS)
Salcedo, Lorenzo Luis
2018-03-01
Complex weights appear in physics and lie beyond a straightforward importance-sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold that reproduces the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat-bath method, based on such representations, is analyzed.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
NASA Astrophysics Data System (ADS)
Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.
2016-12-01
Continuous over-exploitation of groundwater causes dramatic drawdown and leads to regional land subsidence in the Huairou Emergency Water Resources region, located in the upper-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of the strata's lithofacies in the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of the strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the thickness of the clay layer extracted from the simulation with the deformation field acquired from PS-InSAR technology, the influence of the strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories, and their probability distribution in the observation space was mined by using transition probability geostatistics; clay was the predominant compressible material. Geologically plausible realizations of the lithofacies distribution were produced, accounting for the complex heterogeneity of the alluvial plain. At a probability level of more than 40 percent, the volume of clay identified was 55 percent of the total volume of the strata's lithofacies. This level, nearly equaling the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Similar patterns were found between the spatial distribution of the deformation field and that of the clay layer. In areas with roughly similar water-table decline, more subsidence occurs at locations with a higher probability of compressible material in the subsurface than at locations with a lower probability. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
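A minimal sketch of the model and of the empirical characteristic function on which the proposed estimation is based: one-sided exponential pulses with Poisson arrivals and exponentially distributed amplitudes. The duration, pulse width, and intensity values are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(4)
T, dt, tau, rate = 1000.0, 0.1, 1.0, 0.5    # duration, step, pulse width, intensity
t = np.arange(0, T, dt)

n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)       # exponential (positive definite) amplitudes

sig = np.zeros_like(t)
for t0, a in zip(arrivals, amps):
    m = t >= t0
    sig[m] += a * np.exp(-(t[m] - t0) / tau)    # one-sided exponential pulses

u = np.linspace(-5, 5, 11)
ecf = np.exp(1j * np.outer(u, sig)).mean(axis=1)   # empirical characteristic function
print(np.round(np.abs(ecf), 3))

Swapping the amplitude draw for a distribution that is not positive definite (e.g. normal) changes the moments and characteristic function while leaving the estimation approach unchanged, which is the point the abstract makes.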
Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Bandi, Mahesh M.; Connaughton, Colm
2008-03-01
We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
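A minimal sketch comparing Monte Carlo samples of the product of two correlated standard normals (the force-velocity product of the text) with the commonly quoted closed-form Craig density; the correlation value is an arbitrary assumption.

import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
rho = 0.3                                        # assumed force-velocity correlation
x, y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000).T
z = x * y                                        # analogue of the injected power

def craig_pdf(z, rho):
    """Closed form for the product of correlated standard normals (Craig)."""
    s = 1.0 - rho ** 2
    return np.exp(rho * z / s) * k0(np.abs(z) / s) / (np.pi * np.sqrt(s))

for v in (-2.0, -0.5, 0.5, 2.0):
    mc = np.mean(np.abs(z - v) < 0.05) / 0.1     # simple histogram density estimate
    print(f"z = {v:+.1f}: MC {mc:.4f}, theory {craig_pdf(v, rho):.4f}")

The Bessel function K0 diverges logarithmically at zero and decays exponentially, reproducing the sharp peak and exponential tails described in the abstract; the exp(rho z) factor produces the tail asymmetry.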
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael
2014-01-01
Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing cannot only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risk and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
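A minimal sketch of the probability-of-success calculation (power averaged over a normal prior for the treatment effect, often called assurance); the phase II estimate, standard errors, and alpha are hypothetical values.

import numpy as np
from scipy import stats

theta2, se2 = 0.25, 0.12        # phase II effect estimate and its standard error
se3, alpha = 0.08, 0.025        # phase III standard error, one-sided alpha

z = stats.norm.ppf(1 - alpha)
# Closed form: PoS = Phi((theta2 - z * se3) / sqrt(se3^2 + se2^2))
pos = stats.norm.cdf((theta2 - z * se3) / np.hypot(se3, se2))
power_at_theta2 = stats.norm.cdf(theta2 / se3 - z)   # naive power, no prior
print(f"power at point estimate = {power_at_theta2:.3f}, PoS = {pos:.3f}")

The probability of success is lower than the naive power because the prior spreads mass over smaller effects; an inflated phase II estimate (e.g. after subgroup selection) inflates both quantities, which is the bias the study investigates.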
Product of Ginibre matrices: Fuss-Catalan and Raney distributions
NASA Astrophysics Data System (ADS)
Penson, Karol A.; Życzkowski, Karol
2011-06-01
Squared singular values of a product of s square random Ginibre matrices are asymptotically characterized by probability distributions P_s(x), such that their moments are equal to the Fuss-Catalan numbers of order s. We find a representation of the Fuss-Catalan distributions P_s(x) in terms of a combination of s hypergeometric functions of the type _sF_{s-1}. The explicit formula derived here is exact for an arbitrary positive integer s, and for s = 1 it reduces to the Marchenko-Pastur distribution. Using similar techniques, involving the Mellin transform and the Meijer G function, we find exact expressions for the Raney probability distributions, the moments of which are given by a two-parameter generalization of the Fuss-Catalan numbers. These distributions can also be considered as a two-parameter generalization of the Wigner semicircle law.
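A minimal numerical check of the moment statement: squared singular values of a product of s normalized Ginibre matrices against the Fuss-Catalan numbers of order s. The matrix size and the value of s are arbitrary assumptions, and the agreement is up to finite-size error.

import numpy as np
from math import comb

def fuss_catalan(s, n):
    return comb((s + 1) * n, n) // (s * n + 1)   # integer-valued for all s, n

rng = np.random.default_rng(6)
N, s = 300, 2
G = np.eye(N, dtype=complex)
for _ in range(s):
    G = G @ (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

x = np.linalg.svd(G, compute_uv=False) ** 2      # squared singular values
for n in (1, 2, 3):
    print(n, f"{np.mean(x ** n):.3f}", fuss_catalan(s, n))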
ERIC Educational Resources Information Center
Bensman, Stephen J.
2000-01-01
This speculative historiographic essay attempts to fix the present position of library and information science within the context of the probabilistic revolution that has been encompassing all of science. Comprises a guide to statistical research in library and information science, discussing skewed distributions, biostatistics, stochastic models,…
Wijeysundera, Duminda N; Austin, Peter C; Hux, Janet E; Beattie, W Scott; Laupacis, Andreas
2009-01-01
Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P<0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio<1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
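A minimal sketch of the kind of exact conjugate re-analysis described: Beta posteriors for each arm of a two-arm trial with a dichotomous outcome, and Monte Carlo posterior probabilities of benefit. The event counts and the uniform Beta(1, 1) prior are illustrative assumptions.

import numpy as np
from scipy import stats

events_t, n_t = 30, 200          # treatment arm (assumed counts)
events_c, n_c = 45, 200          # control arm

rng = np.random.default_rng(7)
post_t = stats.beta.rvs(1 + events_t, 1 + n_t - events_t, size=100_000, random_state=rng)
post_c = stats.beta.rvs(1 + events_c, 1 + n_c - events_c, size=100_000, random_state=rng)

rr = post_t / post_c
print(f"P(any benefit, RR < 1)    = {np.mean(rr < 1):.3f}")
print(f"P(large benefit, RR < 0.75) = {np.mean(rr < 0.75):.3f}")

As the abstract notes, the posterior probability of any benefit can be high while the probability of a clinically large benefit remains moderate, a distinction a single P-value does not convey.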
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
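A minimal sketch of the convolution step: the route travel-time distribution of two links under an independence assumption, the baseline that the study refines with link-to-link conditional probabilities to capture upstream-downstream speed correlation. The discretized normal link distributions are assumptions for illustration.

import numpy as np

dt = 0.1                                           # minutes per bin
t = np.arange(0, 30, dt)
link1 = np.exp(-0.5 * ((t - 8) / 1.5) ** 2)        # link 1 ~ N(8, 1.5^2), unnormalized
link2 = np.exp(-0.5 * ((t - 12) / 2.0) ** 2)       # link 2 ~ N(12, 2.0^2)
link1, link2 = link1 / link1.sum(), link2 / link2.sum()

route = np.convolve(link1, link2)                  # distribution of the summed times
t_route = np.arange(route.size) * dt
print(f"mean route time = {(t_route * route).sum():.2f} min")   # ~20 min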
Event-related potential studies of outcome processing and feedback-guided learning.
San Martín, René
2012-01-01
In order to control behavior in an adaptive manner the brain has to learn how some situations and actions predict positive or negative outcomes. During the last decade cognitive neuroscientists have shown that the brain is able to evaluate and learn from outcomes within a few hundred milliseconds of their occurrence. This research has been primarily focused on the feedback-related negativity (FRN) and the P3, two event-related potential (ERP) components that are elicited by outcomes. The FRN is a frontally distributed negative-polarity ERP component that typically reaches its maximal amplitude 250 ms after outcome presentation and tends to be larger for negative than for positive outcomes. The FRN has been associated with activity in the anterior cingulate cortex (ACC). The P3 (~300-600 ms) is a parietally distributed positive-polarity ERP component that tends to be larger for large magnitude than for small magnitude outcomes. The neural sources of the P3 are probably distributed over different regions of the cortex. This paper examines the theories that have been proposed to explain the functional role of these two ERP components during outcome processing. Special attention is paid to extant literature addressing how these ERP components are modulated by outcome valence (negative vs. positive), outcome magnitude (large vs. small), outcome probability (unlikely vs. likely), and behavioral adjustment. The literature offers few generalizable conclusions, but is beset with a number of inconsistencies across studies. This paper discusses the potential reasons for these inconsistencies and points out some challenges that probably will shape the field over the next decade.
Messner, Michael J; Berger, Philip; Javier, Julie
2017-06-01
Public water systems (PWSs) in the United States generate total coliform (TC) and Escherichia coli (EC) monitoring data, as required by the Total Coliform Rule (TCR). We analyzed data generated in 2011 by approximately 38,000 small (serving fewer than 4101 individuals) undisinfected public water systems (PWSs). We used statistical modeling to characterize a distribution of TC detection probabilities for each of nine groupings of PWSs based on system type (community, non-transient non-community, and transient non-community) and population served (less than 101, 101-1000 and 1001-4100 people). We found that among PWS types sampled in 2011, on average, undisinfected transient PWSs test positive for TC 4.3% of the time as compared with 3% for undisinfected non-transient PWSs and 2.5% for undisinfected community PWSs. Within each type of PWS, the smaller systems have higher median TC detection than the larger systems. All TC-positive samples were assayed for EC. Among TC-positive samples from small undisinfected PWSs, EC is detected in about 5% of samples, regardless of PWS type or size. We evaluated the upper tail of the TC detection probability distributions and found that significant percentages of some system types have high TC detection probabilities. For example, assuming the systems providing data are nationally-representative, then 5.0% of the ∼50,000 small undisinfected transient PWSs in the U.S. have TC detection probabilities of 20% or more. Communities with such high TC detection probabilities may have elevated risk of acute gastrointestinal (AGI) illness - perhaps as great or greater than the attributable risk to drinking water (6-22%) calculated for 14 Wisconsin community PWSs with much lower TC detection probabilities (about 2.3%, Borchardt et al., 2012). Published by Elsevier GmbH.
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped onto a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four-dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and non-calcification lesions. Lesion locations sampled from this probability distribution function were mapped onto individual mammograms using a piecewise affine transform which maps the average outline to the outline of the breast in the mammogram. The four-dimensional probability distribution function was validated by comparing it to the two-dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the locations of lesions sampled from the four-dimensional probability distribution function across radiographic projections was shown to match the correlation of the original mapped lesion locations. The current system has been implemented as a web service using the Python Django framework. The server performs the sampling and the mapping and returns the results in JavaScript Object Notation (JSON) format.
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment (TQSM) model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) the total quantum statistical moment similarity S_T, the overlapped area of two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) the total variability D_T, a confidence limit of the standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities integrated up to the intersection of their curves; (3) the total variable probability 1 - S_s, the standard normal distribution probability within the interval D_T; (4) the total variable probability (1 - beta)alpha; and (5) the stable confident probability beta(1 - alpha), the correct probability for making positive and negative conclusions under confidence coefficient alpha. With the model, we found that the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction, and of three data-analytical methods for them, were in the range 0.3852-0.9875, illuminating their different pharmacokinetic behaviors; the TQSMSS similarities (S_T) of chromatographic fingerprints for extracts obtained with solvents of different solubility parameters were in the range 0.6842-0.9992, showing different constituents across the solvent extracts. The TQSMSS can characterize sample similarity, by which we can quantify, with a test of power, the correct probability of making positive and negative conclusions whether or not the samples come from the same population under confidence coefficient alpha, enabling analysis at both macroscopic and microcosmic levels as an important similarity-analysis method for medical theoretical research.
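A minimal sketch of the first parameter, the similarity S_T, computed as the overlapped area of two normal probability density curves; the means and standard deviations standing in for the TQSM parameters are assumptions.

import numpy as np
from scipy import stats

mu1, sd1 = 10.0, 2.0        # assumed TQSM parameters, sample 1
mu2, sd2 = 12.0, 2.5        # assumed TQSM parameters, sample 2

x = np.linspace(0, 25, 20001)
dx = x[1] - x[0]
f1 = stats.norm.pdf(x, mu1, sd1)
f2 = stats.norm.pdf(x, mu2, sd2)
s_t = np.minimum(f1, f2).sum() * dx      # overlapped area of the two density curves
print(f"S_T = {s_t:.4f}")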
Distribution of chirality in the quantum walk: Markov process and entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanelli, Alejandro
The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a long-time limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk, as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
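A minimal sketch of the object under study: a discrete-time Hadamard walk on the line, with the chirality (coin) distribution obtained by summing probabilities over position; the initial coin state is one illustrative choice.

import numpy as np

steps = 200
amp = np.zeros((2 * steps + 1, 2), dtype=complex)    # amplitude[position, chirality]
amp[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]       # illustrative initial coin state

h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # Hadamard coin
for _ in range(steps):
    amp = amp @ h.T                                  # apply the coin
    new = np.zeros_like(amp)
    new[:-1, 0] = amp[1:, 0]                         # left chirality moves left
    new[1:, 1] = amp[:-1, 1]                         # right chirality moves right
    amp = new

p_chirality = (np.abs(amp) ** 2).sum(axis=0)         # chirality distribution
print(np.round(p_chirality, 4))                      # near [0.5, 0.5] for this state

Re-running with a different initial coin state shifts the long-time chirality distribution, which is the dependence on initial conditions that the abstract highlights.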
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
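A minimal re-implementation sketch of NEWTONP's central task, written in Python for brevity (the original program is in C): Newton's method for the component probability p yielding a k-out-of-n system reliability V, using the identity P(X >= k) = I_p(k, n-k+1) with the regularized incomplete beta function.

import numpy as np
from scipy import special, stats

def solve_p(k, n, V, tol=1e-12, max_iter=100):
    """Newton iteration for p such that P(X >= k | n, p) = V."""
    p = 0.5
    for i in range(1, max_iter + 1):
        f = special.betainc(k, n - k + 1, p) - V          # reliability minus target
        df = p ** (k - 1) * (1 - p) ** (n - k) / special.beta(k, n - k + 1)
        p_new = np.clip(p - f / df, 1e-12, 1 - 1e-12)
        if abs(p_new - p) < tol:
            return p_new, i
        p = p_new
    raise RuntimeError("Newton iteration did not converge")

p, iters = solve_p(k=3, n=5, V=0.95)
print(f"p = {p:.6f} after {iters} iterations")
print("check:", 1 - stats.binom.cdf(2, 5, p))             # should be ~0.95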
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
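A minimal sketch of the 1D setting: Voronoi cells of a homogeneous point set on a ring, whose normalized sizes follow the known Gamma-type law p(s) = 4 s exp(-2 s); this serves as a baseline against which fragmentation-model predictions can be compared.

import numpy as np

rng = np.random.default_rng(8)
pts = np.sort(rng.uniform(0, 1, 10_000))                 # homogeneous points on a ring
gaps = np.diff(np.concatenate([pts, [pts[0] + 1]]))      # neighbor gaps (circular)
cells = 0.5 * (gaps + np.roll(gaps, 1))                  # 1D Voronoi cell sizes
s = cells / cells.mean()

for v in (0.5, 1.0, 2.0):                                # compare with p(s) = 4 s e^(-2s)
    emp = np.mean(np.abs(s - v) < 0.05) / 0.1
    print(f"s = {v}: empirical {emp:.3f}, theory {4 * v * np.exp(-2 * v):.3f}")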
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
On the issues of probability distribution of GPS carrier phase observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
NASA Astrophysics Data System (ADS)
Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.
2017-05-01
In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during their lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data. Statistical methods were used in the analysis. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall. The goodness of fit was evaluated using the chi-square test. The results of the goodness-of-fit tests indicate that the log-Pearson type III distribution is the overall best fit for the 1-day maximum rainfall and the consecutive 2-, 3-, 4-, 5- and 6-day maximum rainfall series of Tiruchirapalli. For reliability, the forecasted maximum rainfalls for the selected return periods were evaluated in comparison with the results of the plotting positions.
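A minimal sketch of the reported best-fit procedure: fitting a Pearson type III distribution to log-transformed annual maxima (i.e., log-Pearson III) and reading off magnitudes for selected return periods; the annual-maximum series is synthetic.

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
annual_max = rng.gumbel(80, 25, size=100)            # toy 100-year record, mm

logs = np.log10(annual_max)
skew, loc, scale = stats.pearson3.fit(logs)          # Pearson III on log data

for T in (2, 10, 50, 100):                           # return periods, years
    q = stats.pearson3.ppf(1 - 1 / T, skew, loc, scale)
    print(f"T = {T:3d} yr: {10 ** q:7.1f} mm")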
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Joseph; Polin, Abigail; Lommen, Andrea
2014-03-20
The steadily improving sensitivity of pulsar timing arrays (PTAs) suggests that gravitational waves (GWs) from supermassive black hole binary (SMBHB) systems in the nearby universe will be detectable sometime during the next decade. Currently, PTAs assume an equal probability of detection from every sky position, but as evidence grows for a non-isotropic distribution of sources, is there a most likely sky position for a detectable single source of GWs? In this paper, a collection of Galactic catalogs is used to calculate various metrics related to the detectability of a single GW source resolvable above a GW background, assuming that every galaxy has the same probability of containing an SMBHB. Our analyses of these data reveal small probabilities that one of these sources is currently in the PTA band, but as sensitivity is improved, regions of consistent probability density are found in predictable locations, specifically around local galaxy clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebl, Jakob; Paganetti, Harald; Zhu, Mingyao; Winey, Brian A.
2014-09-15
Purpose: Proton radiotherapy allows radiation treatment delivery with high dose gradients. The nature of such dose distributions increases the influence of patient positioning uncertainties on their fidelity when compared to photon radiotherapy. The present work quantitatively analyzes the influence of setup uncertainties on proton range and dose distributions. Methods: Thirty-eight clinical passive scattering treatment fields for small lesions in the head were studied. Dose distributions for shifted and rotated patient positions were Monte Carlo-simulated. Proton range uncertainties at the 50%- and 90%-dose falloff position were calculated considering 18 arbitrary combinations of maximal patient position shifts and rotations for two patient positioning methods. Normal tissue complication probabilities (NTCPs), equivalent uniform doses (EUDs), and tumor control probabilities (TCPs) were studied for organs at risk (OARs) and target volumes of eight patients. Results: The authors identified a median 1σ proton range uncertainty at the 50%-dose falloff of 2.8 mm for anatomy-based patient positioning and 1.6 mm for fiducial-based patient positioning, as well as 7.2 and 5.8 mm for the 90%-dose falloff position, respectively. These range uncertainties were correlated with heterogeneity indices (HIs) calculated for each treatment field (38% < R² < 50%). An NTCP increase of more than 10% (absolute) was observed for less than 2.9% (anatomy-based positioning) and 1.2% (fiducial-based positioning) of the studied OARs and patient shifts. For target volumes, TCP decreases of more than 10% (absolute) occurred in less than 2.2% of the considered treatment scenarios for anatomy-based patient positioning and were nonexistent for fiducial-based patient positioning. EUD changes for target volumes were up to 35% (anatomy-based positioning) and 16% (fiducial-based positioning). Conclusions: The influence of patient positioning uncertainties on proton range in therapy of small lesions in the human brain, as well as on target and OAR dosimetry, was studied. Observed range uncertainties were correlated with HIs. The clinical practice of using multiple fields with smeared compensators while avoiding distal OAR sparing is considered to be safe.
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2016-11-03
Copy Number Variation (CNV) is envisaged to be a major source of large structural variations in the human genome. In recent years, many studies have applied Next Generation Sequencing (NGS) data to CNV detection. However, more accurate computational tools are still needed. In this study, mate-pair NGS data are used for CNV detection with a Hidden Markov Model (HMM). The proposed HMM has position-specific emission probabilities, i.e., a Gaussian mixture distribution. Each component in the Gaussian mixture distribution captures a different type of aberration observed in the mate pairs after they are mapped to the reference genome. These aberrations may include any increase (or decrease) in the insertion size or a change in the direction of mate pairs mapped to the reference genome. This HMM with Position-Specific Emission probabilities (PSE-HMM) is utilized for the genome-wide detection of deletions and tandem duplications. The performance of PSE-HMM is evaluated on a simulated dataset and also on real data from a Yoruban HapMap individual, NA18507. PSE-HMM is effective in taking observation dependencies into account and reaches a high accuracy in detecting genome-wide CNVs. MATLAB programs are available at http://bs.ipm.ir/softwares/PSE-HMM/.
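As an illustration of the emission model just described, a minimal numeric sketch follows; the mixture weights, means, and spreads are hypothetical placeholders, not PSE-HMM's published parameters, and the aberration each component represents is only loosely suggested by the abstract.

```python
import numpy as np
from scipy.stats import norm

# Gaussian-mixture emission density for one HMM state. The parameters are
# hypothetical: each component stands for one aberration type seen in mapped
# mate pairs (normal insert size, enlarged insert suggesting a deletion,
# shortened insert suggesting a tandem duplication).
weights = np.array([0.90, 0.05, 0.05])     # mixture weights (sum to 1)
means   = np.array([0.0, 300.0, -150.0])   # insert-size deviation in bp
sigmas  = np.array([30.0, 60.0, 40.0])     # component spreads in bp

def emission_density(x):
    """Density of observing an insert-size deviation x in this state."""
    return float(np.sum(weights * norm.pdf(x, loc=means, scale=sigmas)))

# Such emission densities plug into the standard HMM forward recursion:
# alpha_t(j) = emission_j(x_t) * sum_i alpha_{t-1}(i) * A[i, j]
print(emission_density(0.0), emission_density(310.0))
```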
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Unifying distribution functions: some lesser known distributions.
Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P
2008-08-01
We show that there is a way to unify distribution functions that describe simultaneously a classical signal in space and (spatial) frequency, and position and momentum for a quantum system. Probably the best known of these is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and the Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.
The tensor distribution function.
Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M
2009-01-01
Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular-resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
Reward skewness coding in the insula independent of probability and loss
Tobler, Philippe N.
2011-01-01
Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610
Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.
Wang, Xueying; Walton, Jay R; Parshad, Rana D
2016-01-01
The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability that wild-type females go extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
Measures for a multidimensional multiverse
NASA Astrophysics Data System (ADS)
Chung, Hyeyoun
2015-04-01
We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p+1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ~ t_Λ ~ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ~ t_Λ ~ t_c. These results require only very mild conditions on the prior probability distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the observed probability distribution P̄(log E) of the wave field E is power-law, with the bar denoting averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain such a power-law spatially-averaged distribution P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.
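A rough numeric illustration of the mechanism described above follows; the drift laws assumed for the mean and standard deviation of log E are invented for the sketch, not taken from the paper.

```python
import numpy as np

# Pool lognormal field statistics whose parameters drift with position.
# The drift laws below are invented placeholders.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=200_000)   # nominal position in the foreshock
mu = -2.0 * np.log(x)                      # assumed drift of the mean of log E
sigma = 0.5 + 0.2 * np.log(x)              # assumed drift of its spread
log_E = rng.normal(mu, sigma)              # lognormal statistics at each point

# A power law P(log E) ~ E^k appears as a straight line of log P vs log E.
hist, edges = np.histogram(log_E, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
slope = np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
print("approximate slope of log P(log E):", slope)
```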
Crovelli, R.A.; Balay, R.H.
1991-01-01
A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model, with an analytic aggregation methodology based on probability theory rather than Monte Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input to the model consists of a set of components (e.g., geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar cases. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
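The analytic machinery TRIAGG automates follows from standard triangular-distribution formulas; the sketch below re-expresses it in Python (TRIAGG itself is Turbo Pascal) with illustrative inputs.

```python
import numpy as np

# For a triangular distribution with minimum a, mode m, and maximum b:
def tri_mean_var(a, m, b):
    mean = (a + m + b) / 3.0
    var = (a*a + m*m + b*b - a*m - a*b - m*b) / 18.0
    return mean, var

def tri_fractile(a, m, b, p):
    """Inverse CDF of the triangular distribution at probability p."""
    fc = (m - a) / (b - a)
    if p < fc:
        return a + np.sqrt(p * (b - a) * (m - a))
    return b - np.sqrt((1.0 - p) * (b - a) * (b - m))

# Aggregate two components (e.g., geologic provinces): means always add;
# under independence the variances add, under perfect positive correlation
# the standard deviations add.
m1, v1 = tri_mean_var(0.0, 2.0, 10.0)
m2, v2 = tri_mean_var(1.0, 3.0, 8.0)
print("aggregate mean:", m1 + m2)
print("sd, independent:", np.sqrt(v1 + v2))
print("sd, perfectly correlated:", np.sqrt(v1) + np.sqrt(v2))
# F95 is the value exceeded with probability 0.95, i.e. the 5th percentile:
print("F95 of component 1:", tri_fractile(0.0, 2.0, 10.0, 0.05))
```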
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of differing precision and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT samplings under a normal distribution and the simulated probability density curve based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
The living Drake equation of the Tau Zero Foundation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-03-01
The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
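The paper's central claim is easy to check by Monte Carlo: the log of a product of independent positive factors is a sum of independent terms, so the CLT drives it toward a Gaussian and N toward a lognormal. A minimal sketch follows; the uniform ranges are arbitrary illustrations, not the paper's inputs.

```python
import numpy as np

# Product of seven independent, positive, arbitrarily distributed factors.
rng = np.random.default_rng(42)
n_draws = 100_000
factors = [rng.uniform(lo, hi, n_draws) for lo, hi in
           [(1e11, 4e11), (0.2, 0.8), (0.5, 3.0), (0.1, 0.5),
            (0.05, 0.4), (0.1, 0.6), (1e-9, 1e-6)]]
N = np.prod(factors, axis=0)
logN = np.log(N)

# If N is close to lognormal, log N is close to Gaussian:
# skewness ~ 0 and excess kurtosis ~ 0.
z = (logN - logN.mean()) / logN.std()
print("skewness of log N:", np.mean(z**3))
print("excess kurtosis of log N:", np.mean(z**4) - 3.0)
```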
Williams, Michael S; Cao, Yong; Ebel, Eric D
2013-07-15
Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
Determination of the mass of globular cluster X-ray sources
NASA Technical Reports Server (NTRS)
Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.
1984-01-01
The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the distribution of the X-ray source masses is determined to be in the range 0.9-1.9 solar masses. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central-density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.
Roubal, George; Atlas, Ronald M.
1978-01-01
Hydrocarbon-utilizing microorganisms were enumerated from Alaskan continental shelf areas by using plate counts and a new most-probable-number procedure based on mineralization of 14C-labeled hydrocarbons. Hydrocarbon utilizers were ubiquitously distributed, with no significant overall concentration differences between sampling regions or between surface water and sediment samples. There were, however, significant seasonal differences in numbers of hydrocarbon utilizers. Distribution of hydrocarbon utilizers within Cook Inlet was positively correlated with occurrence of hydrocarbons in the environment. Hydrocarbon biodegradation potentials were measured by using 14C-radiolabeled hydrocarbon-spiked crude oil. There was no significant correlation between numbers of hydrocarbon utilizers and hydrocarbon biodegradation potentials. The biodegradation potentials showed large seasonal variations in the Beaufort Sea, probably due to seasonal depletion of available nutrients. Non-nutrient-limited biodegradation potentials followed the order hexadecane > naphthalene ≫ pristane > benzanthracene. In Cook Inlet, biodegradation potentials for hexadecane and naphthalene were dependent on availability of inorganic nutrients. Biodegradation potentials for pristane and benzanthracene were restricted, probably by resistance to attack by available enzymes in the indigenous population. PMID:655706
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-02
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganisms per milliliter or real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method. Published by Elsevier B.V.
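For context, the Poisson dilution model underlying a single MPN determination can be sketched briefly; the tube design and counts below are invented for illustration, and this maximum likelihood step is only the per-sample estimate that precedes the distribution fitting discussed above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Poisson dilution model: a tube inoculated with volume v from a sample of
# concentration lam tests positive with probability 1 - exp(-lam * v).
# The design and counts below are hypothetical.
volumes   = np.array([10.0, 1.0, 0.1])   # mL of sample per tube, by dilution
n_tubes   = np.array([3, 3, 3])          # tubes per dilution
positives = np.array([3, 1, 0])          # observed positive tubes

def neg_log_lik(lam):
    # Clip to guard against log(0) at extreme lam values.
    p = np.clip(1.0 - np.exp(-lam * volumes), 1e-12, 1.0 - 1e-12)
    # Binomial log-likelihood of the positive counts at each dilution.
    return -np.sum(positives * np.log(p)
                   + (n_tubes - positives) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print("single-sample MPN estimate (organisms/mL):", res.x)
```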
Evaluation of an Ensemble Dispersion Calculation.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.
2003-02-01
A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
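The 27-member construction described above is simply the Cartesian product of three offsets per axis; a minimal sketch follows (variable names are ours).

```python
from itertools import product

# Every combination of a -1/0/+1 gridpoint offset in each horizontal
# direction and a -250/0/+250 m offset in the vertical gives
# 3 * 3 * 3 = 27 members, each assigned equal probability.
offsets = list(product((-1, 0, 1), (-1, 0, 1), (-250.0, 0.0, 250.0)))
assert len(offsets) == 27
member_weight = 1.0 / len(offsets)

for dx, dy, dz in offsets[:3]:   # show the first few members
    print(f"shift met grid by ({dx:+d}, {dy:+d}) gridpoints, {dz:+.0f} m; "
          f"weight {member_weight:.4f}")
```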
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution, in order to exclude the possible negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Applications of the first digit law to measure correlations.
Gramm, R; Yost, J; Su, Q; Grobe, R
2017-04-01
The quasiempirical Benford law predicts that the distribution of the first significant digit of random numbers obtained from mixed probability distributions is surprisingly meaningful and reveals some universal behavior. We generalize this finding to examine the joint first-digit probability of a pair of two random numbers and show that undetectable correlations by means of the usual covariance-based measure can be identified in the statistics of the corresponding first digits. We illustrate this new measure by analyzing the correlations and anticorrelations of the positions of two interacting particles in their quantum mechanical ground state. This suggests that by using this measure, the presence or absence of correlations can be determined even if only the first digit of noisy experimental data can be measured accurately.
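A small numeric sketch of the idea follows, assuming Benford-like data generated from lognormals; the data model and the deviation measure below are illustrative, not the paper's exact statistic.

```python
import numpy as np

# Benford's law gives the marginal first-digit probability
# P(d) = log10(1 + 1/d). A correlation measure can compare the empirical
# joint digit distribution of a pair (x, y) against the product of its
# marginals (the independence hypothesis).
def first_digit(x):
    x = np.abs(x)
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

rng = np.random.default_rng(1)
x = rng.lognormal(0.0, 2.0, 100_000)
y = x * rng.lognormal(0.0, 0.1, 100_000)   # strongly dependent pair

dx, dy = first_digit(x), first_digit(y)
joint = np.zeros((9, 9))
np.add.at(joint, (dx - 1, dy - 1), 1.0)
joint /= joint.sum()
marg_x, marg_y = joint.sum(axis=1), joint.sum(axis=0)

# Deviation of the joint digit law from the independent product reveals the
# dependence even when covariance-based measures might miss it.
deviation = np.abs(joint - np.outer(marg_x, marg_y)).sum()
print("deviation of joint digit law from independence:", deviation)
```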
NASA Astrophysics Data System (ADS)
Langan, Liam; Scheiter, Simon; Higgins, Steven
2017-04-01
It remains poorly understood why the position of the forest-savanna biome boundary, in a domain defined by precipitation and temperature, differs between South America, Africa and Australia. Process-based dynamic global vegetation models (DGVMs) are a valuable tool for investigating the determinants of vegetation distributions; however, many DGVMs fail to predict the spatial distribution, or indeed the presence, of the South American savanna biome. Evidence suggests fire plays a significant role in mediating forest-savanna biome boundaries; however, fire alone appears to be insufficient to predict these boundaries in South America. We hypothesize that interactions between precipitation, constraints on tree rooting depth, and fire affect the probability of savanna occurrence and the position of the savanna-forest boundary. We tested our hypotheses at tropical forest and savanna sites in Brazil and Venezuela using a novel DGVM, aDGVM2, which allows plant trait spectra, constrained by trade-offs between traits, to evolve in response to abiotic and biotic conditions. Plant hydraulics is represented by the cohesion-tension theory, which allowed us to explore how soil and plant hydraulics control biome distributions and plant traits. The resulting community trait distributions are emergent properties of model dynamics. We showed that across much of South America the biome state is not determined by climate alone. Interactions between tree rooting depth, fire and precipitation affected the probability of observing a given biome state and the emergent traits of plant communities. Simulations where plant rooting depth varied in space provided the best match to satellite-derived biomass estimates and generated biome distributions that reproduced contemporary biome maps well. Future projections showed that biomass distributions, biome distributions and plant trait spectra will change; however, the magnitude of these changes is highly dependent on the applied atmospheric forcings.
On the properties of stochastic intermittency in rainfall processes.
Molini, A; La Barbera, P; Lanza, L G
2002-01-01
In this work we propose a mixed approach to the modelling of rainfall events, based on the analysis of geometrical and statistical properties of rain intermittency in time, combined with the predictive power derived from the analysis of the distribution of no-rain periods and from the binary decomposition of the rain signal. Some recent hypotheses on the nature of rain intermittency are also reviewed. In particular, the internal intermittent structure of a high-resolution pluviometric time series covering one decade, recorded at the tipping-bucket station of the University of Genova, is analysed by separating the internal intermittency of rainfall events from the inter-arrival process through a simple geometrical filtering procedure. In this way it is possible to associate no-rain intervals with a probability distribution by virtue of both their position within the event and their percentage. From this analysis, an invariant probability distribution for the no-rain periods within the events is obtained at different aggregation levels, and its satisfactory agreement with a typical extreme value distribution is shown.
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
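The mechanism, a random parametrisation producing non-Gaussian statistics, can be illustrated with the simplest superstatistical example: Gaussian displacements whose diffusivity is itself random. The sketch below is illustrative and not the paper's generalised Langevin model; an exponential diffusivity mixture is known to give Laplace-like tails.

```python
import numpy as np

# Draw a random diffusivity per trajectory, then a Gaussian displacement
# given that diffusivity. Each trajectory is Gaussian, but the population of
# displacements is not: an exponential variance mixture of Gaussians is a
# Laplace distribution.
rng = np.random.default_rng(7)
n_traj = 200_000
D = rng.exponential(1.0, n_traj)          # random parametrisation
x = rng.normal(0.0, np.sqrt(2.0 * D))     # displacement at unit time

z = (x - x.mean()) / x.std()
print("excess kurtosis (0 for Gaussian, 3 for Laplace):", np.mean(z**4) - 3.0)
```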
Does probability of occurrence relate to population dynamics?
Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.
2014-01-01
Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (P_occ). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with P_occ, while N, and for most regions K, was generally positively correlated with P_occ. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.
The semigroup of metric measure spaces and its infinitely divisible probability measures
Evans, Steven N.; Molchanov, Ilya
2015-01-01
A metric measure space is a complete, separable metric space equipped with a probability measure that has full support. Two such spaces are equivalent if they are isometric as metric spaces via an isometry that maps the probability measure on the first space to the probability measure on the second. The resulting set of equivalence classes can be metrized with the Gromov-Prohorov metric of Greven, Pfaffelhuber and Winter. We consider the natural binary operation ⊞ on this space that takes two metric measure spaces and forms their Cartesian product equipped with the sum of the two metrics and the product of the two probability measures. We show that the metric measure spaces equipped with this operation form a cancellative, commutative, Polish semigroup with a translation invariant metric. There is an explicit family of continuous semicharacters that is extremely useful for, inter alia, establishing that there are no infinitely divisible elements and that each element has a unique factorization into prime elements. We investigate the interaction between the semigroup structure and the natural action of the positive real numbers on this space that arises from scaling the metric. For example, we show that for any given positive real numbers a, b, c, the trivial space is the only space X that satisfies aX ⊞ bX = cX. We establish that there is no analogue of the law of large numbers: if X_1, X_2, … is an identically distributed independent sequence of random spaces, then no subsequence of (1/n) ⊞_{k=1}^{n} X_k converges in distribution unless each X_k is almost surely equal to the trivial space. We characterize the infinitely divisible probability measures and the Lévy processes on this semigroup, characterize the stable probability measures, and establish a counterpart of the LePage representation for the latter class. PMID:28065980
Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko
2016-07-25
Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions on the Internet and select what fits their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs to study visitor search behavior. To date, research applying a patient search behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and to determine the probability of their visits, classified by search keyword. We used the website access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability, classified by keyword, for each category. In the case of the keyword "clinic name," the visit probabilities for the website, for a repeated visit to the website, and for the medical examination contents page were positive. In the case of the keyword "clinic name + regional name," the probabilities for a repeated visit to the website and for the mammography screening page were negative. In the case of the keyword "clinic name + medical examination," the visit probability for the website was positive, and the visit probability for the information page was negative. When visitors referred to the keyword "mammography screening," the visit probability for the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Further analysis of not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis.
Conflict Probability Estimation for Free Flight
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
1996-01-01
The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
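The covariance-combination step described above can be checked numerically; the sketch below Monte Carlo-estimates the same quantity the paper solves analytically, with invented covariances, separation, and protection radius.

```python
import numpy as np

# The two aircraft prediction errors are modeled as independent Gaussians,
# so the relative-position error has covariance equal to the sum of the two
# covariances; a conflict is a relative position inside the protection zone.
rng = np.random.default_rng(3)
cov1 = np.array([[4.0, 1.0], [1.0, 2.0]])     # nmi^2, aircraft 1 (illustrative)
cov2 = np.array([[3.0, -0.5], [-0.5, 1.5]])   # nmi^2, aircraft 2
cov_rel = cov1 + cov2                          # single equivalent covariance
mean_rel = np.array([6.0, 2.0])                # predicted relative position (nmi)
R = 5.0                                        # protection-zone radius (nmi)

samples = rng.multivariate_normal(mean_rel, cov_rel, size=1_000_000)
p_conflict = np.mean(np.hypot(samples[:, 0], samples[:, 1]) < R)
print("estimated conflict probability:", p_conflict)
```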
Role of conviction in nonequilibrium models of opinion formation
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Anteneodo, Celia
2012-12-01
We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
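A safeguarded Newton iteration in this spirit can be sketched in Python (the original program is in C). It uses the identity dF(n; lambda)/dlambda = -pmf(n; lambda) and adds a bisection fallback for robustness, which the abstract does not mention.

```python
from scipy.stats import poisson

def newtpois(n, cum_prob, eps=1e-10):
    """Solve poisson.cdf(n, lam) == cum_prob for lam (Newton + bisection)."""
    lo, hi = 0.0, 1.0
    while poisson.cdf(n, hi) > cum_prob:     # find an upper bracket
        hi *= 2.0
    lam = 0.5 * (lo + hi)
    for _ in range(200):
        f = poisson.cdf(n, lam) - cum_prob
        if abs(f) < eps:
            break
        if f > 0:        # CDF too high means lambda is still too small
            lo = lam
        else:
            hi = lam
        # Newton step: d/dlam poisson.cdf(n, lam) = -poisson.pmf(n, lam)
        lam_new = lam + f / poisson.pmf(n, lam)
        # fall back to bisection whenever Newton leaves the bracket
        lam = lam_new if lo < lam_new < hi else 0.5 * (lo + hi)
    return lam

# Example: the lambda for which P(X <= 3) = 0.5 (about 3.67)
print(newtpois(3, 0.5))
```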
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities yields high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure is shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
Optical Field-Strength Polarization of Two-Mode Single-Photon States
ERIC Educational Resources Information Center
Linares, J.; Nistal, M. C.; Barral, D.; Moreno, V.
2010-01-01
We present a quantum analysis of two-mode single-photon states based on the probability distributions of the optical field strength (or position quadrature) in order to describe their quantum polarization characteristics, where polarization is understood as a significative confinement of the optical field-strength values on determined regions of…
Jimsphere wind and turbulence exceedance statistic
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Court, A.
1972-01-01
Exceedance statistics of winds and gusts observed over Cape Kennedy with Jimsphere balloon sensors are described. Gust profiles containing positive and negative departures from smoothed profiles, in the wavelength ranges 100-2500, 100-1900, 100-860, and 100-460 meters, were computed from 1578 profiles with four 41-weight digital high-pass filters. Extreme values of the square root of gust speed are normally distributed. Monthly and annual exceedance probability distributions of normalized rms gust speeds in three altitude bands (2-7, 6-11, and 9-14 km) are log-normal. The rms gust speeds are largest in the 100-2500 m wavelength band between 9 and 14 km in late winter and early spring. A study of monthly and annual exceedance probabilities and the number of occurrences per kilometer of level crossings with positive slope indicates significant variability with season, altitude, and filter configuration. A decile sampling scheme is tested and an optimum approach is suggested for drawing a relatively small random sample that represents the characteristic extreme wind speeds and shears of a large parent population of Jimsphere wind profiles.
Soft inclusion in a confined fluctuating active gel
NASA Astrophysics Data System (ADS)
Singh Vishen, Amit; Rupprecht, J.-F.; Shivashankar, G. V.; Prost, J.; Rao, Madan
2018-03-01
We study stochastic dynamics of a point and extended inclusion within a one-dimensional confined active viscoelastic gel. We show that the dynamics of a point inclusion can be described by a Langevin equation with a confining potential and multiplicative noise. Using a systematic adiabatic elimination over the fast variables, we arrive at an overdamped equation with a proper definition of the multiplicative noise. To highlight various features and to appeal to different biological contexts, we treat the inclusion in turn as a rigid extended element, an elastic element, and a viscoelastic (Kelvin-Voigt) element. The dynamics for the shape and position of the extended inclusion can be described by coupled Langevin equations. Deriving exact expressions for the corresponding steady-state probability distributions, we find that the active noise induces an attraction to the edges of the confining domain. In the presence of a competing centering force, we find that the shape of the probability distribution exhibits a sharp transition upon varying the amplitude of the active noise. Our results could help understanding the positioning and deformability of biological inclusions, e.g., organelles in cells, or nucleus and cells within tissues.
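As a rough illustration of the kind of dynamics involved, the Euler-Maruyama sketch below integrates an overdamped Langevin equation with a confining harmonic potential and a position-dependent (multiplicative) noise amplitude. The drift, noise profile, and Ito convention are assumptions for the sketch, not the paper's derived equation.

```python
import numpy as np

rng = np.random.default_rng(2)

k = 1.0                          # stiffness of the confining potential
D = lambda x: 0.5 + 0.4 * x**2   # position-dependent noise strength
dt, n_steps = 1e-3, 500_000

x, xs = 0.0, []
for i in range(n_steps):
    # Euler-Maruyama step in the Ito convention
    x += -k * x * dt + np.sqrt(2.0 * D(x) * dt) * rng.standard_normal()
    if i % 50 == 0:
        xs.append(x)

# The histogram approximates the steady-state probability distribution.
hist, edges = np.histogram(xs, bins=61, density=True)
imax = np.argmax(hist)
print("most probable position ~", 0.5 * (edges[imax] + edges[imax + 1]))
```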
Rogue waves driven by polarization instabilities in a long ring fiber oscillator
NASA Astrophysics Data System (ADS)
Kolpakov, S. A.; Kbashi, Hani; Sergeyev, Sergey
2017-05-01
We present experimental and theoretical results of a study of complex nonlinear polarization dynamics in a passively self-mode-locked erbium-doped fiber oscillator implemented in a ring configuration and operating near the lasing threshold. The theoretical model consists of seven coupled nonlinear equations and takes into account both orthogonal states of polarization in the fiber. The experiment confirmed the existence of seven eigenfrequencies, predicted by the model due to polarization instability near the lasing threshold. By adjusting the state of polarization of the pump and the in-cavity birefringence, we changed some eigenfrequencies from being different (non-degenerate state) to matching (degenerate state). The non-degenerate states of the oscillator lead to an L-shaped probability distribution function and a true rogue wave regime with a positive dominant Lyapunov exponent value between 1.4 and 2.6. Small detuning from the partially degenerate case also leads to an L-shaped probability distribution function with a tail trespassing the eight-standard-deviation threshold, giving periodic patterns of pulses along with a positive dominant Lyapunov exponent of the filtered signal between 0.6 and 3.2. Partial degeneration, in turn, leads to a quasi-symmetric distribution and a dominant Lyapunov exponent value of 42, which is a typical value for systems with a source of strongly nonhomogeneous external noise.
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
van Rooijen, Dominique C; van de Kamer, Jeroen B; Pool, René; Hulshof, Maarten CCM; Koning, Caro CE; Bel, Arjan
2009-01-01
Background The purpose of this study was to determine the dosimetric effect of on-line position correction for bladder tumor irradiation and to find methods to predict and handle this effect. Methods For 25 patients with unifocal bladder cancer intensity modulated radiotherapy (IMRT) with 5 beams was planned. The requirement for each plan was that 99% of the target volume received 95% of the prescribed dose. Tumor displacements from -2.0 cm to 2.0 cm in each dimension were simulated, using 0.5 cm increments, resulting in 729 simulations per patient. We assumed that on-line correction for the tumor was applied perfectly. We determined the correlation between the change in D99% and the change in path length, which is defined here as the distance from the skin to the isocenter for each beam. In addition the margin needed to avoid underdosage was determined and the probability that an underdosage occurs in a real treatment was calculated. Results Adjustments for tumor displacement with perfect on-line position correction resulted in an altered dose distribution. The altered fraction dose to the target varied from 91.9% to 100.4% of the prescribed dose. The mean D99% (± SD) was 95.8% ± 1.0%. There was a modest linear correlation between the difference in D99% and the change in path length of the beams after correction (R2 = 0.590). The median probability that a systematic underdosage occurs in a real treatment was 0.23% (range: 0 - 24.5%). A margin of 2 mm reduced that probability to < 0.001% in all patients. Conclusion On-line position correction does result in an altered target coverage, due to changes in average path length after position correction. An extra margin can be added to prevent underdosage. PMID:19775479
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and its closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases.
We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM, and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
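A toy version of the EVT score-calibration step shared by these methods might look like the following; the decision scores are synthetic and the tail size is an arbitrary choice, so this is a sketch in the spirit of W-SVM-style Weibull calibration rather than the dissertation's actual algorithms.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)

# Hypothetical positive-class decision scores (distances to the
# separating hyperplane; larger = deeper inside the class).
pos_scores = np.abs(rng.normal(2.0, 0.7, size=200))

# Fit a Weibull to the tail of scores nearest the decision boundary.
tail = np.sort(pos_scores)[:30]
shape, loc, scale = weibull_min.fit(tail, floc=0.0)

def prob_inclusion(score):
    """Calibrated probability of class inclusion; abates toward zero
    as a test score approaches the decision boundary."""
    return weibull_min.cdf(score, shape, loc=loc, scale=scale)

print(prob_inclusion(0.3), prob_inclusion(3.0))   # low vs. high score
```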
Enders, Laramy; Hefley, Trevor; Girvin, John; Whitworth, Robert; Smith, Charles
2018-05-11
Several aphid species transmit barley yellow dwarf, a globally destructive disease caused by viruses that infect cereal grain crops. Data from >400 samples collected across Kansas wheat fields in 2014 and 2015 were used to develop spatio-temporal models predicting the extent to which landcover, temperature and precipitation affect spring aphid vector abundance and the presence of individuals carrying Barley yellow dwarf virus (BYDV). The distribution of Rhopalosiphum padi abundance was not correlated with climate or landcover, but Sitobion avenae abundance was positively correlated with fall temperature and negatively correlated with spring temperature and precipitation. The abundance of Schizaphis graminum was negatively correlated with fall precipitation and winter temperature. The incidence of viruliferous (+BYDV) R. padi was positively correlated with fall precipitation but negatively correlated with winter precipitation. In contrast, the probability of +BYDV S. avenae was unaffected by precipitation but was positively correlated with average fall temperatures and distance to the nearest forest or shrubland. R. padi and S. avenae were more prevalent at eastern sample sites, where ground cover is more grassland than cropland, suggesting that grassland may provide over-summering sites for vectors and pose a risk as potential BYDV reservoirs. Nevertheless, land cover patterns were not strongly associated with differences in abundance or the probability that viruliferous aphids were present.
Universal rule for the symmetric division of plant cells
Besson, Sébastien; Dumais, Jacques
2011-01-01
The division of eukaryotic cells involves the assembly of complex cytoskeletal structures to exert the forces required for chromosome segregation and cytokinesis. In plants, empirical evidence suggests that tensional forces within the cytoskeleton cause cells to divide along the plane that minimizes the surface area of the cell plate (Errera’s rule) while creating daughter cells of equal size. However, exceptions to Errera’s rule cast doubt on whether a broadly applicable rule can be formulated for plant cell division. Here, we show that the selection of the plane of division involves a competition between alternative configurations whose geometries represent local area minima. We find that the probability of observing a particular division configuration increases inversely with its relative area according to an exponential probability distribution known as the Gibbs measure. Moreover, a comparison across land plants and their most recent algal ancestors confirms that the probability distribution is widely conserved and independent of cell shape and size. Using a maximum entropy formulation, we show that this empirical division rule is predicted by the dynamics of the tense cytoskeletal elements that lead to the positioning of the preprophase band. Based on the fact that the division plane is selected from the sole interaction of the cytoskeleton with cell shape, we posit that the new rule represents the default mechanism for plant cell division when internal or external cues are absent. PMID:21383128
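The Gibbs measure itself is a one-liner. In the sketch below, each candidate division configuration gets a probability that decays exponentially with its relative plate area; the areas and the rate constant are invented for illustration.

```python
import numpy as np

# Relative cell-plate areas of hypothetical candidate configurations,
# each a local minimum of wall area, in units of the smallest one.
areas = np.array([1.00, 1.15, 1.40])
beta = 5.0    # illustrative rate constant of the exponential law

# Gibbs measure: probability decreases exponentially with area.
weights = np.exp(-beta * areas)
probs = weights / weights.sum()
print(probs.round(3))   # the shortest wall is likeliest, but not certain
```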
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By taking the dependence among variables into account, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
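A sketch of the construction using one of the four copulas (Clayton, sampled by conditional inversion) and gamma-type marginals standing in for Pearson Type III fits; all parameter values are invented rather than the Bohai Sea estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Clayton copula sampled by conditional inversion:
# V = ((W**(-theta/(1+theta)) - 1) * U**(-theta) + 1)**(-1/theta)
theta, n = 2.0, 100_000
u, w = rng.random(n), rng.random(n)
v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)

# Marginals: wave height (m) and wind speed (m/s) as shifted gammas
# (the Pearson Type III family), with made-up parameters.
wave = stats.gamma.ppf(u, a=2.0, loc=0.5, scale=1.2)
wind = stats.gamma.ppf(v, a=3.0, loc=2.0, scale=2.5)

# Joint exceedance probability of a candidate design load combination:
print(np.mean((wave > 5.0) & (wind > 15.0)))
```

Because the copula couples the uniforms before the marginal transforms are applied, this joint exceedance probability is noticeably larger than the product of the two marginal exceedance probabilities would suggest.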
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained. The corresponding one-dimensional distributions are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to complete the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is promoted dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
Spacing distribution functions for 1D point island model with irreversible attachment
NASA Astrophysics Data System (ADS)
Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto
2011-03-01
We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_xy^n(x,y), which represents the probability density of nucleation at position x within a gap of size y. Our proposed functional form for p_xy^n(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Understanding the joint behavior of temperature and precipitation for climate change impact studies
NASA Astrophysics Data System (ADS)
Rana, Arun; Moradkhani, Hamid; Qin, Yueyue
2017-07-01
The multiple downscaled scenario products allow us to assess the uncertainty of the variations of precipitation and temperature in the current and future periods. Probabilistic assessments of both climatic variables help better understand their interdependence and thus, in turn, help in assessing the future with confidence. In the present study, we use an ensemble of statistically downscaled precipitation and temperature from various models. The dataset is a multi-model ensemble product of 10 global climate models (GCMs), downscaled from the CMIP5 daily dataset using the Bias Correction and Spatial Downscaling (BCSD) technique and generated at Portland State University. The multi-model ensemble of both precipitation and temperature is evaluated for dry and wet periods for 10 sub-basins across the Columbia River Basin (CRB). Thereafter, a copula is applied to establish the joint distribution of the two variables on the multi-model ensemble data. The joint distribution is then used to estimate the change in trends of said variables in the future, along with the probabilities of the given change. The joint distribution trends vary, but are consistently positive, for dry and wet periods in the sub-basins of the CRB. The dry season generally indicates a larger positive change in precipitation than in temperature (compared to the historical period) across sub-basins, whereas the wet season suggests the opposite. Probabilities of future changes, as estimated from the joint distribution, show varied degrees and forms during the dry season, whereas the wet season is rather constant across all the sub-basins.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
Asymptotic behavior of the daily increment distribution of the IPC, the Mexican stock market index
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-02-01
In this work, a statistical analysis of the distribution of daily fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of the IPC covering the 13-year period 04/19/1990 - 08/21/2003 was analyzed and the cumulative probability distribution of its daily logarithmic variations studied. Results showed that the cumulative distribution function for extreme variations can be described by a Pareto-Lévy model with shape parameters α = 3.634 ± 0.272 and α = 3.540 ± 0.278 for its positive and negative tails, respectively. This result is consistent with previous studies, where it has been found that 2.5 < α < 4 for other financial markets worldwide.
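A standard Hill estimator recovers tail exponents of this kind; in the sketch below, synthetic Student-t returns (tail index 3.6 by construction) merely stand in for the IPC series.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in daily log-returns with a power-law tail of index ~3.6.
returns = 0.01 * rng.standard_t(df=3.6, size=3300)

# Hill estimator over the k largest absolute moves.
k = 100
x = np.sort(np.abs(returns))
alpha_hat = k / np.sum(np.log(x[-k:] / x[-(k + 1)]))
print("tail exponent alpha ~", round(alpha_hat, 2))
```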
Distribution of injected power fluctuations in electroconvection.
Tóth-Katona, Tibor; Gleeson, J T
2003-12-31
We report on the distribution spectra of the fluctuations in the amount of power injected into a liquid crystal undergoing electroconvective flow. The probability distribution functions (PDFs) of the fluctuations, as well as the magnitude of the fluctuations, have been determined over a wide range of imposed stress for both unconfined and confined flow geometries. These spectra are compared to those found in other systems held far from equilibrium, and we find that under certain conditions we obtain the universal PDF form reported in [Phys. Rev. Lett. 84, 3744 (2000)]. Moreover, the PDF approaches this universal form via an interesting mechanism whereby the distribution's negative tail evolves towards it in a different manner than the positive tail.
No-signaling quantum key distribution: solution by linear programming
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, using linear programming, we obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
The effect of kerosene injection on ignition probability of local ignition in a scramjet combustor
NASA Astrophysics Data System (ADS)
Bao, Heng; Zhou, Jin; Pan, Yu
2017-03-01
The spark ignition of kerosene is investigated in a scramjet combustor at a flight condition of Mach 4 at 17 km. Based on plentiful experimental data, the ignition probabilities of local ignition have been acquired for different injection setups. The ignition probability distributions show that the injection pressure and injection location have a distinct effect on spark ignition. The injection pressure has both upper and lower limits for local ignition. Generally, a larger mass flow rate reduces the ignition probability. The injection position also affects ignition near the lower pressure limit. The reason is thought to be the cavity swallowing effect on the upstream jet spray near the leading edge, which makes the cavity fuel-rich. The corner recirculation zone near the front wall of the cavity plays a significant role in the stabilization of the local flame.
Quasi-Bell inequalities from symmetrized products of noncommuting qubit observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamel, Omar E.; Fleming, Graham R.
2017-05-01
Noncommuting observables cannot be simultaneously measured; however, under local hidden variable models, they must simultaneously hold premeasurement values, implying the existence of a joint probability distribution. We study the joint distributions of noncommuting observables on qubits, with possible criteria of positivity and the Fréchet bounds limiting the joint probabilities, concluding that the latter may be negative. We use symmetrization, justified heuristically and then more carefully via the Moyal characteristic function, to find the quantum operator corresponding to the product of noncommuting observables. This is then used to construct Quasi-Bell inequalities, Bell inequalities containing products of noncommuting observables, on two qubits. These inequalities place limits on the local hidden variable models that define joint probabilities for noncommuting observables. We also found that the Quasi-Bell inequalities have a quantum to classical violation as high as 3/2 on two qubits, higher than conventional Bell inequalities. Our result demonstrates the theoretical importance of noncommutativity in the nonlocality of quantum mechanics and provides an insightful generalization of Bell inequalities.
NASA Astrophysics Data System (ADS)
Kosov, Daniel S.
2017-09-01
Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for the electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. The strong electron-vibration coupling is required for the emergence of the non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.
Two-mode mazer injected with V-type three-level atoms
NASA Astrophysics Data System (ADS)
Liang, Wen-Qing; Zhang, Zhi-Ming; Xie, Sheng-Wu
2003-12-01
The properties of the two-mode mazer operating on V-type three-level atoms are studied. The effect of the one-atom pumping on the two modes of the cavity field in a number state is asymmetric; that is, the atom emits a photon into one mode with some probability and absorbs a photon from the other mode with some other probability. This effect makes the steady-state photon distribution and the steady-state photon statistics asymmetric for the two modes. The diagram of the probability currents for the photon distribution, given by the analysis of the master equation, reveals that there is no detailed-balance solution for the master equation. The computations show that the photon statistics of one mode or both modes can be sub-Poissonian, that the two modes can be anticorrelated or correlated, that the photon statistics increase with the number of thermal photons, and that the resonant position and strength of the photon statistics are influenced by the ratio of the two coupling strengths of the two modes. These properties are also discussed physically.
Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Sternberg, Andrew L.; Kozub, John A.
2017-12-07
Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of the calculated through-wall flaw probability to material, flaw, and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position, and flaw inspection were among the sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation, and added error in estimating irradiated properties caused by the trend curve correlation error.
Bayesian microsaccade detection
Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji
2017-01-01
Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watermann, J.; McNamara, A.G.; Sofko, G.J.
Some 7,700 radio aurora spectra obtained from a six-link 50-MHz CW radar network set up on the Canadian prairies were analyzed with respect to the distributions of mean Doppler shift, spectral width, and skewness. A comparison with recently published SABRE results obtained at 153 MHz shows substantial differences in the distributions, which are probably due to different experimental and geophysical conditions. The spectra are mostly broad with mean Doppler shifts close to zero (type II spectra). The typical groupings of type I and type III spectra are clearly identified. All types appear to be in general much more symmetric than those recorded with SABRE, and the skewness is only weakly dependent on the sign of the mean Doppler shift. Its distribution peaks near zero and shows a weak positive correlation with the type II Doppler shifts, while the mostly positive type I Doppler shifts are slightly negatively correlated with the skewness.
Shen, Jian; Deng, Degang; Kong, Weijin; Liu, Shijie; Shen, Zicai; Wei, Chaoyang; He, Hongbo; Shao, Jianda; Fan, Zhengxiu
2006-11-01
By introducing the scattering probability of a subsurface defect (SSD) and statistical distribution functions of SSD radius, refractive index, and position, we derive an extended bidirectional reflectance distribution function (BRDF) from the Jones scattering matrix. This function is applicable to calculating, for comparison with measurements, the polarized light scattering resulting from an SSD. A numerical calculation of the extended BRDF for the case of p-polarized incident light was performed by means of the Monte Carlo method. Our numerical results indicate that the extended BRDF strongly depends on the light incidence angle, the light scattering angle, and the out-of-plane azimuth angle. We observe a 180-degree symmetry with respect to the azimuth angle. We further investigate the influence of the SSD density, the substrate refractive index, and the statistical distributions of the SSD radius and refractive index on the extended BRDF. For transparent substrates, we also find a dependence of the extended BRDF on the SSD positions.
Modelling the spatial distribution of Fasciola hepatica in dairy cattle in Europe.
Ducheyne, Els; Charlier, Johannes; Vercruysse, Jozef; Rinaldi, Laura; Biggeri, Annibale; Demeler, Janina; Brandt, Christina; De Waal, Theo; Selemetas, Nikolaos; Höglund, Johan; Kaba, Jaroslaw; Kowalczyk, Slawomir J; Hendrickx, Guy
2015-03-26
A harmonized sampling approach in combination with spatial modelling is required to update current knowledge of fasciolosis in dairy cattle in Europe. Within the scope of the EU project GLOWORM, samples from 3,359 randomly selected farms in 849 municipalities in Belgium, Germany, Ireland, Poland and Sweden were collected and their infection status assessed using an indirect bulk tank milk (BTM) enzyme-linked immunosorbent assay (ELISA). Dairy farms were considered exposed when the optical density ratio (ODR) exceeded the 0.3 cut-off. Two ensemble-modelling techniques, Random Forests (RF) and Boosted Regression Trees (BRT), were used to obtain the spatial distribution of the probability of exposure to Fasciola hepatica using remotely sensed environmental variables (1-km spatial resolution) and interpolated values from meteorological stations as predictors. The median ODRs amounted to 0.31, 0.12, 0.54, 0.25 and 0.44 for Belgium, Germany, Ireland, Poland and southern Sweden, respectively. Using the 0.3 threshold, 571 municipalities were categorized as positive and 429 as negative. RF was capable of predicting the spatial distribution of exposure with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.83 (0.96 for BRT). Both models identified rainfall and temperature as the most important factors for probability of exposure. Areas of high and low exposure were identified by both models, with BRT better at discriminating between low-probability and high-probability exposure; this model may therefore be more useful in practice. Given a harmonized sampling strategy, it should be possible to generate robust spatial models for fasciolosis in dairy cattle in Europe to be used as input for temporal models and for the detection of deviations in baseline probability. Further research is required for model output in areas outside the eco-climatic range investigated.
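A Random Forests exposure model of the kind described can be sketched with scikit-learn; the predictors, sample size, and response construction below are entirely hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)

# Hypothetical farm-level predictors (say, rainfall, temperature,
# elevation) and a binary exposure status (BTM ELISA ODR > 0.3).
X = rng.normal(size=(1000, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000)) > 0

rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                            random_state=0).fit(X, y)
print("out-of-bag accuracy:", round(rf.oob_score_, 3))
print("variable importances:", rf.feature_importances_.round(3))

# Probability-of-exposure predictions for new (e.g., gridded) locations:
grid = rng.normal(size=(5, 3))
print(rf.predict_proba(grid)[:, 1])
```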
Computer Vision Tracking Using Particle Filters for 3D Position Estimation
2014-03-27
The particle filter approximates the target probability distribution p(x) by a set of N weighted samples drawn from a proposal distribution π, where ω_n are the importance weights and δ is the Dirac delta function [2, p. 178]: p(x) = Σ_{n=1}^{N} ω_n δ(x − x_n) (2.14), with importance weights ω_n ∝ p(x)/π(x) (2.15).
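A minimal numerical rendering of Eqs. (2.14)-(2.15), using toy one-dimensional Gaussian target and proposal densities, is sketched below.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

N = 5000
particles = rng.normal(0.0, 2.0, size=N)           # x_n drawn from pi
target = norm.pdf(particles, loc=1.0, scale=1.0)   # p(x_n)
proposal = norm.pdf(particles, loc=0.0, scale=2.0) # pi(x_n)

weights = target / proposal                        # omega_n ~ p(x)/pi(x), Eq. (2.15)
weights /= weights.sum()                           # normalize

# The weighted particle set approximates expectations under p(x), Eq. (2.14):
print("E[x] estimate:", round(np.sum(weights * particles), 3))  # ~1.0
```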
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum-insulated equipment, a study was carried out to clarify the breakdown probability distributions of vacuum gaps. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage that permits a zero breakdown probability. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupter's sharing voltage is taken into account.
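Fitting a three-parameter Weibull, whose location parameter is the voltage with zero breakdown probability, can be sketched with SciPy; the synthetic breakdown voltages and parameter values below are illustrative only.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic breakdown voltages (kV) from a Weibull with shape 12,
# location 40 kV, and scale 25 kV.
data = weibull_min.rvs(c=12.0, loc=40.0, scale=25.0, size=200,
                       random_state=np.random.default_rng(8))

shape, loc, scale = weibull_min.fit(data)
print(f"shape {shape:.1f}, location {loc:.1f} kV, scale {scale:.1f} kV")

# Estimated breakdown probability at a candidate operating voltage:
print("P(breakdown at 55 kV):", weibull_min.cdf(55.0, shape, loc, scale))
```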
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging each individual's probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Beta-binomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Beta-binomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Beta-binomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Beta-binomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
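The posterior calculation is directly available in SciPy's beta-binomial; the prior parameters and day counts below are invented for illustration.

```python
from scipy.stats import betabinom

# Illustrative population-level prior Beta(alpha, beta) (in practice
# estimated by maximum likelihood from accelerometer data).
alpha, beta = 1.8, 1.2

# A child observed active on 4 of 7 assessed days:
active, assessed = 4, 7

# Posterior predictive number of active days in a 7-day week.
posterior = betabinom(7, alpha + active, beta + (assessed - active))
print("P(active on 7 of 7 days):", round(posterior.pmf(7), 3))
```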
Multiple-solution problems in a statistics classroom: an example
NASA Astrophysics Data System (ADS)
Chu, Chi Wing; Chan, Kevin L. T.; Chan, Wai-Sum; Kwong, Koon-Shing
2017-11-01
The mathematics education literature shows that encouraging students to develop multiple solutions for given problems has a positive effect on students' understanding and creativity. In this paper, we present an example of multiple-solution problems in statistics involving a set of non-traditional dice. In particular, we consider the exact probability mass distribution for the sum of face values. Four different ways of solving the problem are discussed. The solutions span various basic concepts in different mathematical disciplines (sample space in probability theory, the probability generating function in statistics, integer partition in basic combinatorics, and the individual risk model in actuarial science) and thus promote upper undergraduate students' awareness of knowledge connections between their courses. All solutions of the example are implemented using the R statistical software package.
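The generating-function route is easy to sketch: multiplying probability generating functions corresponds to convolving probability mass functions. The sketch below (in Python, though the paper's implementations are in R) uses the Sicherman pair as a stand-in for the paper's unspecified non-traditional dice.

```python
import numpy as np

# Sicherman dice: a non-traditional pair whose sum has the same PMF
# as two ordinary dice.
dice = [[1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8]]

pmf = np.array([1.0])            # PGF coefficients of the empty sum
offset = 0                       # smallest achievable sum so far
for faces in dice:
    die = np.zeros(max(faces) - min(faces) + 1)
    for f in faces:
        die[f - min(faces)] += 1 / len(faces)
    pmf = np.convolve(pmf, die)  # PGF product = PMF convolution
    offset += min(faces)

for s, p in enumerate(pmf, start=offset):
    print(s, round(p, 4))        # matches the two-ordinary-dice PMF
```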
A computational framework to empower probabilistic protein design
Fromer, Menachem; Yanover, Chen
2008-01-01
Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
Conservative Belief and Rationality
2012-10-03
Brandenburger and Dekel have shown that common belief of rationality (CBR) … themselves in a position to which they initially assigned probability 0. Tan and Werlang [1988] and Brandenburger and Dekel [1987] show that common …
Flow-duration-frequency behaviour of British rivers based on annual minima data
NASA Astrophysics Data System (ADS)
Zaidman, Maxine D.; Keller, Virginie; Young, Andrew R.; Cadman, Daniel
2003-06-01
A comparison of different probability distribution models for describing the flow-duration-frequency behaviour of annual minima flow events in British rivers is reported. Twenty-five catchments were included in the study, each having stable and natural flow records of at least 30 years in length. Time series of annual minima D-day average flows were derived for each record using durations ( D) of 1, 7, 30, 60, 90, and 365 days and used to construct low flow frequency curves. In each case the Gringorten plotting position formula was used to determine probabilities (of non-exceedance). Four distribution types—Generalised Extreme Value (GEV), Generalised Logistic (GL), Pearson Type-3 (PE3) and Generalised Pareto (GP)—were used to model the probability distribution function for each site. L-moments were used to parameterise individual models, whilst goodness-of-fit tests were used to assess their match to the sample data. The study showed that where short durations (i.e. 60 days or less) were considered, high storage catchments tended to be best represented by GL and GEV distribution models whilst low storage catchments were best described by PE3 or GEV models. However, these models produced reasonable results only within a limited range (e.g. models for high storage catchments did not produce sensible estimates of return periods where the prescribed flow was less than 10% of the mean flow). For annual minima series derived using long duration flow averages (e.g. more than 90 days), GP and GEV models were generally more applicable. The study suggests that longer duration minima do not conform to the same distribution types as short durations, and that catchment properties can influence the type of distribution selected.
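The Gringorten formula and the sample L-moments used for parameterization are compact enough to sketch directly; the annual-minima flows below are invented.

```python
import numpy as np

def gringorten(n):
    """Gringorten plotting positions (non-exceedance probabilities)."""
    i = np.arange(1, n + 1)
    return (i - 0.44) / (n + 0.12)

def l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0          # l1 (location), l2 (scale)

# Hypothetical annual minima of 7-day average flows (m^3/s):
flows = [3.1, 2.4, 4.0, 1.8, 2.9, 3.5, 2.2, 2.7, 3.8, 1.5]
print("plotting positions:", gringorten(len(flows)).round(3))
print("l1, l2:", l_moments(flows))
```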
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
NASA Astrophysics Data System (ADS)
Ueunten, Kevin K.
With the scheduled 30 September 2015 integration of Unmanned Aerial System (UAS) into the national airspace, the Federal Aviation Administration (FAA) is concerned with UAS capabilities to sense and avoid conflicts. Since the operator is outside the cockpit, the proposed collision awareness plugin (CAPlugin), based on probability and error propagation, conservatively predicts potential conflicts with other aircraft and airspaces, thus increasing the operator's situational awareness. The conflict predictions are calculated using a forward state estimator (FSE) and a conflict calculator. Predicting an aircraft's position, modeled as a mixed Gaussian distribution, is the FSE's responsibility. Furthermore, the FSE supports aircraft engaged in the following three flight modes: free flight, flight path following and orbits. The conflict calculator uses the FSE result to calculate the conflict probability between an aircraft and airspace or another aircraft. Finally, the CAPlugin determines the highest conflict probability and warns the operator. In addition to discussing the FSE free flight, FSE orbit and the airspace conflict calculator, this thesis describes how each algorithm is implemented and tested. Lastly, two simulations demonstrate the CAPlugin's capabilities.
Agoritsas, Thomas; Courvoisier, Delphine S; Combescure, Christophe; Deom, Marie; Perneger, Thomas V
2011-04-01
The probability of a disease following a diagnostic test depends on the sensitivity and specificity of the test, but also on the prevalence of the disease in the population of interest (or pre-test probability). How physicians use this information is not well known. To assess whether physicians correctly estimate post-test probability according to various levels of prevalence and explore this skill across respondent groups. Randomized trial. Population-based sample of 1,361 physicians of all clinical specialties. We described a scenario of a highly accurate screening test (sensitivity 99% and specificity 99%) in which we randomly manipulated the prevalence of the disease (1%, 2%, 10%, 25%, 95%, or no information). We asked physicians to estimate the probability of disease following a positive test (categorized as <60%, 60-79%, 80-94%, 95-99.9%, and >99.9%). Each answer was correct for a different version of the scenario, and no answer was possible in the "no information" scenario. We estimated the proportion of physicians proficient in assessing post-test probability as the proportion of correct answers beyond the distribution of answers attributable to guessing. Most respondents in each of the six groups (67%-82%) selected a post-test probability of 95-99.9%, regardless of the prevalence of disease and even when no information on prevalence was provided. This answer was correct only for a prevalence of 25%. We estimated that 9.1% (95% CI 6.0-14.0) of respondents knew how to assess correctly the post-test probability. This proportion did not vary with clinical experience or practice setting. Most physicians do not take into account the prevalence of disease when interpreting a positive test result. This may cause unnecessary testing and diagnostic errors.
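The post-test probability the scenario asks about follows directly from Bayes' theorem; a short sketch reproducing the scenario's numbers (sensitivity and specificity of 99%, prevalence varied):

```python
def post_test_probability(prevalence, sensitivity=0.99, specificity=0.99):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.02, 0.10, 0.25, 0.95):
    print(f"prevalence {prev:5.2f} -> post-test {post_test_probability(prev):.3f}")
```

Only at 25% prevalence does the answer fall in the 95-99.9% band chosen by most respondents; at 1% prevalence a positive result from this highly accurate test still implies just a 50% chance of disease.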
Augmenting Phase Space Quantization to Introduce Additional Physical Effects
NASA Astrophysics Data System (ADS)
Robbins, Matthew P. G.
Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
A methodology for stochastic analysis of share prices as Markov chains with finite states.
Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey
2014-01-01
Price volatilities make stock investments risky, leaving investors in a critical position when decisions must be made under uncertainty. To improve investor confidence in evaluating exchange markets, while not using time-series methodology, we specify equity price change as a stochastic process assumed to possess Markov dependency, with the state transition probability matrices following the identified state space (i.e. decrease, stable or increase). We established that the identified states communicate, and that the chains are aperiodic and ergodic, thus possessing limiting distributions. We developed a methodology for determining the expected mean return time for stock price increases and also established criteria for improving investment decisions based on highest transition probabilities, lowest mean return time and highest limiting distributions. We further developed an R algorithm for running the methodology introduced. The established methodology is applied to selected equities from Ghana Stock Exchange weekly trading data.
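A minimal sketch of the chain-based quantities the abstract names, written in Python rather than the paper's R, with a hypothetical weekly state sequence (0 = decrease, 1 = stable, 2 = increase):

```python
import numpy as np

# Hypothetical weekly price-change states.
states = np.array([2, 2, 1, 0, 2, 1, 1, 2, 0, 0, 2, 2, 1, 2, 0, 1, 2, 2, 1, 2])

# Maximum-likelihood estimate of the transition matrix from observed counts.
k = 3
counts = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Limiting distribution: left eigenvector of P for eigenvalue 1, normalised.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Mean return (recurrence) time for each state of an ergodic chain: 1 / pi_i.
mean_return_time = 1.0 / pi
print(P, pi, mean_return_time)
```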
Xiang, T X
1993-01-01
A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 x 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å2 per chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å-3, respectively, and increase to 0.049 and 0.0067 Å-3 at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å3, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 Å3 in the highly ordered chain region to 9.0 Å3 in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, but does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions have comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has a higher probability than predicted from an independent-transition model. PMID:8241390
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
Prediction of future asset prices
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei
2014-12-01
This paper attempts to incorporate trading volumes as an additional predictor for predicting asset prices. Denoting r(t) as the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysia stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume in the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When the probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) increase in price tends to be negative (or positive). Thus the above probability has a good potential of being used as a market indicator in technical analysis.
Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable for and outperforms Fourier transforms on unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively, correlations describing persistence) tends to decrease (respectively, increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D-freely decaying turbulence is weakened, while it is strengthened for fracture, for the ion-signature prior to the Kobe earthquake and for financial markets.
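A sketch of the Monte-Carlo estimation of the false-alarm probability of the largest Lomb peak under a heavy-tailed null, using scipy's Lomb-Scargle implementation; the sampling times, Student-t noise null and test signal are illustrative assumptions (the paper also treats correlated fractional-Gaussian noises):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))     # unevenly sampled times
freqs = np.linspace(0.05, 5.0, 2000)          # angular frequencies (rad/s)

def max_peak(y):
    y = (y - y.mean()) / y.std()
    return lombscargle(t, y, freqs).max()

# Series under test: a sinusoid buried in heavy-tailed (Student-t) noise.
observed = max_peak(np.sin(1.3 * t) + rng.standard_t(df=3, size=t.size))

# Null ensemble: pure heavy-tailed noise on the same sampling times.
null_peaks = np.array([max_peak(rng.standard_t(df=3, size=t.size))
                       for _ in range(500)])
false_alarm_prob = np.mean(null_peaks >= observed)
print(false_alarm_prob)   # fraction of pure-noise realizations beating the data
```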
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to a random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, the positions of nuclei are correlated. In this article an analytical approach is presented for computing the pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest neighbor distribution function. The modeling is developed for nucleation in 2D space with a power growth law, and it is applied to describe electrochemical nucleation, where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model, which gives insights into the transition from a Poissonian to a correlated nearest neighbor probability density.
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
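A simplified sketch of photon event distribution sampling image formation: each photon's measured position contributes a probability density (here a fixed-width Gaussian standing in for the intensity-related uncertainty function), and the per-photon densities are summed; the emitter position and photon counts are hypothetical:

```python
import numpy as np

def peds_image(photon_xy, sigma, shape, scale=1.0):
    """Sum per-photon Gaussian probability densities on a display grid.

    photon_xy : (N, 2) maximum-likelihood positions of photon origins
    sigma     : uncertainty (std. dev.) of each position measurement
    """
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx] * scale
    img = np.zeros(shape)
    for x, y in photon_xy:
        img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
    return img / (2.0 * np.pi * sigma ** 2)

# ~200 photons from a point emitter at (10, 12); localization by the
# multi-variate normal method reduces to the mean of the photon positions.
rng = np.random.default_rng(1)
photons = rng.normal(loc=(10.0, 12.0), scale=0.5, size=(200, 2))
image = peds_image(photons, sigma=0.5, shape=(24, 24))
estimate = photons.mean(axis=0)       # particle localization estimate
print(estimate)
```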
Probabilistic track coverage in cooperative sensor networks.
Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A
2010-12-01
The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas
2015-01-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", a function providing the probabilities of occurrence of different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the most relevant distribution applied to statistical analysis.
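As a concrete instance of a probability computed from the normal distribution (a minimal sketch):

```python
from scipy.stats import norm

# Probability that a standard normal variable (mean 0, sd 1)
# falls within one standard deviation of the mean.
p = norm.cdf(1.0) - norm.cdf(-1.0)
print(round(p, 4))   # 0.6827
```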
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimates is fed to the extended Kalman filter (EKF) to finish the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is promoted dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
UXO Burial Prediction Fidelity: A Summary
2017-07-01
... equilibrium. Any complete picture of munition evolution in sediment would need to account for these effects. More relevant to the present topic: these ... adds uncertainty to predictions of munition fate, and assessments of risk probabilities would need to account for the statistical distribution of ...
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
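A sketch of the two-step method as described: color a white Gaussian sample in the Fourier domain to the target spectrum, then map it through the Gaussian CDF and the target quantile function; the power-law spectrum and gamma target are illustrative choices, and the nonlinear second step perturbs the spectrum somewhat, which is why the authors call this an engineering approach:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# Step 1: white Gaussian noise, colored in the Fourier domain so its power
# spectral density follows a chosen target (here an isotropic power law).
white = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
f[0, 0] = f[0, 1]                        # avoid the DC singularity
filt = f ** (-1.5)                       # amplitude filter = sqrt(PSD)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
colored = (colored - colored.mean()) / colored.std()

# Step 2: memoryless transform to the desired amplitude PDF via the
# probability integral transform (Gaussian CDF -> uniform -> target quantile).
u = stats.norm.cdf(colored)
target = stats.gamma(a=2.0)              # e.g. a gamma-distributed signal
signal = target.ppf(u)
print(signal.mean(), target.mean())      # amplitude statistics follow the target
```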
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking the impact probabilities into account introduces significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
Substrate preferences of epiphytic bromeliads: an experimental approach
NASA Astrophysics Data System (ADS)
Zotz, Gerhard; Vollrath, Birgit
2002-05-01
Based on the known vertical distributions of three epiphyte species we tested the hypothesis that observed interspecific differences are determined at a very early ontogenetic stage. We attached 1296 first-year seedlings of the three species Guzmania monostachya, Tillandsia fasciculata, and Vriesea sanguinolenta (Bromeliaceae) to substrates differing in orientation and relative position within the crown of the host tree, Annona glabra. Surprisingly, we found no evidence for differential mortality on different substrate types for any of the three species. Hence, differences in vertical distribution cannot be explained by interspecific differences in site-specific survival at this stage. This suggests that spatial distribution patterns are determined even earlier, probably resulting from species differences in seed dispersal or during germination.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N. We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
High monetary reward rates and caloric rewards decrease temporal persistence
Bode, Stefan; Murawski, Carsten
2017-01-01
Temporal persistence refers to an individual's capacity to wait for future rewards, while forgoing possible alternatives. This requires a trade-off between the potential value of delayed rewards and opportunity costs, and is relevant to many real-world decisions, such as dieting. Theoretical models have previously suggested that high monetary reward rates, or positive energy balance, may result in decreased temporal persistence. In our study, 50 fasted participants engaged in a temporal persistence task, incentivised with monetary rewards. In alternating blocks of this task, rewards were delivered at delays drawn randomly from distributions with either a lower or higher maximum reward rate. During some blocks participants received either a caloric drink or water. We used survival analysis to estimate participants' probability of quitting conditional on the delay distribution and the consumed liquid. Participants had a higher probability of quitting in blocks with the higher reward rate. Furthermore, participants who consumed the caloric drink had a higher probability of quitting than those who consumed water. Our results support the predictions from the theoretical models, and importantly, suggest that both higher monetary reward rates and physiologically relevant rewards can decrease temporal persistence, which is a crucial determinant for survival in many species. PMID:28228517
High monetary reward rates and caloric rewards decrease temporal persistence.
Fung, Bowen J; Bode, Stefan; Murawski, Carsten
2017-02-22
Temporal persistence refers to an individual's capacity to wait for future rewards, while forgoing possible alternatives. This requires a trade-off between the potential value of delayed rewards and opportunity costs, and is relevant to many real-world decisions, such as dieting. Theoretical models have previously suggested that high monetary reward rates, or positive energy balance, may result in decreased temporal persistence. In our study, 50 fasted participants engaged in a temporal persistence task, incentivised with monetary rewards. In alternating blocks of this task, rewards were delivered at delays drawn randomly from distributions with either a lower or higher maximum reward rate. During some blocks participants received either a caloric drink or water. We used survival analysis to estimate participants' probability of quitting conditional on the delay distribution and the consumed liquid. Participants had a higher probability of quitting in blocks with the higher reward rate. Furthermore, participants who consumed the caloric drink had a higher probability of quitting than those who consumed water. Our results support the predictions from the theoretical models, and importantly, suggest that both higher monetary reward rates and physiologically relevant rewards can decrease temporal persistence, which is a crucial determinant for survival in many species. © 2017 The Authors.
Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Junfeng; Yang, Tao; Wu, Di
In this paper, we consider the economic dispatch problem (EDP), where a cost function that is assumed to be strictly convex is assigned to each of the distributed energy resources (DERs), over packet-dropping networks. The goal of a standard EDP is to minimize the total generation cost while meeting the total demand and satisfying individual generator output limits. We propose a distributed algorithm for solving the EDP over networks. The proposed algorithm is resilient against packet drops over communication links. Under the assumption that the underlying communication network is strongly connected with a positive probability and the packet drops are independent and identically distributed (i.i.d.), we show that the proposed algorithm is able to solve the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.
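The optimum the distributed algorithm must reach satisfies the classical equal-incremental-cost condition; below is a centralized sketch of that condition (not the packet-drop-resilient consensus protocol itself), with hypothetical quadratic costs and limits:

```python
import numpy as np

# Quadratic generation costs C_i(p) = a_i * p^2 + b_i * p with output limits.
a = np.array([0.10, 0.08, 0.12])
b = np.array([2.0, 3.0, 2.5])
p_min = np.array([0.0, 0.0, 0.0])
p_max = np.array([40.0, 50.0, 30.0])
demand = 90.0

def dispatch(lmbda):
    # At the optimum all marginal costs equal lambda: 2*a_i*p_i + b_i = lambda,
    # clipped to each generator's output limits.
    return np.clip((lmbda - b) / (2.0 * a), p_min, p_max)

# Bisection on lambda until total generation meets demand.
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dispatch(mid).sum() < demand else (lo, mid)
print(dispatch(mid), dispatch(mid).sum())
```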
Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa
2017-01-01
Frequency-locked detector (FLD) has been widely utilized in tracking loops of Global Positioning System (GPS) receivers to indicate their locking status. The relation between FLD and lock status has been seldom discussed. The traditional PLL experience is not suitable for FLL. In this paper, the threshold setting criteria for frequency-locked detector in the GPS receiver has been proposed by analyzing statistical characteristic of FLD output. The approximate probability distribution of frequency-locked detector is theoretically derived by using a statistical approach, which reveals the relationship between probabilities of frequency-locked detector and the carrier-to-noise ratio (C/N0) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N0 can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis. PMID:29207546
Jin, Tian; Yuan, Heliang; Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa
2017-12-04
Frequency-locked detector (FLD) has been widely utilized in tracking loops of Global Positioning System (GPS) receivers to indicate their locking status. The relation between FLD and lock status has been seldom discussed. The traditional PLL experience is not suitable for FLL. In this paper, the threshold setting criteria for frequency-locked detector in the GPS receiver has been proposed by analyzing statistical characteristic of FLD output. The approximate probability distribution of frequency-locked detector is theoretically derived by using a statistical approach, which reveals the relationship between probabilities of frequency-locked detector and the carrier-to-noise ratio (C/N₀) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N₀ can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis.
The Experiment of the Clog Reduction in a Plane Silo
NASA Astrophysics Data System (ADS)
Sun, Ai-Le; Zhang, Jie
2017-06-01
The flow of particles may clog when they pass through a narrow orifice. Many factors can change the probability of clogging, such as the outlet size, the presence of obstacles and external perturbation, but the detailed mechanisms are still unclear. In this paper, we present an experimental study of the reduction of the clogging probability in a horizontal plane silo, which consists of a layer of elastic particles transported on an annular flat plate rotating with a constant angular velocity and passing through a hopper structure. We found exponential distributions of the avalanche size for different orifice sizes and power-law tails of the passing time between two particles. We did not confirm whether there is a critical orifice size above which clogging becomes impossible. We also explored the effect of an obstacle on the probability of clogging: if a proper obstacle is placed at a proper position, the probability of clogging can be reduced by a factor of about seven.
Characterization of the Ionospheric Scintillations at High Latitude using GPS Signal
NASA Astrophysics Data System (ADS)
Mezaoui, H.; Hamza, A. M.; Jayachandran, P. T.
2013-12-01
Transionospheric radio signals experience both amplitude and phase variations as a result of propagation through a turbulent ionosphere; this phenomenon is known as ionospheric scintillation. As a result of these fluctuations, Global Positioning System (GPS) receivers lose track of signals and consequently incur position and navigational errors. Therefore, there is a need to study these scintillations and their causes in order not only to resolve the navigational problem but also to develop analytical and numerical radio propagation models. To quantify and qualify these scintillations, we analyze the probability distribution functions (PDFs) of L1 GPS signals at a 50 Hz sampling rate using Canadian High Arctic Ionospheric Network (CHAIN) measurements. The raw GPS signal is detrended using a wavelet-based technique, and the detrended amplitude and phase of the signal are used to construct probability distribution functions (PDFs) of the scintillating signal. The resulting PDFs are non-Gaussian. From the PDF functional fits, the moments are estimated. The results reveal a general non-trivial parabolic relationship between the normalized fourth and third moments for both the phase and amplitude of the signal. The calculated higher-order moments of the amplitude and phase distribution functions will help quantify some of the scintillation characteristics and in the process provide a basis for forecasting, i.e. for developing a scintillation climatology model. This statistical analysis, including power spectra, along with a numerical simulation, will constitute the backbone of a high latitude scintillation model.
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
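A minimal sketch of the Boltzmann-Gibbs maximum-entropy inference the Shore-Johnson axioms single out, for Jaynes' classic die example (mean constrained to 4.5); the resulting exponential-family form is what preserves the multiplication rule for independent events:

```python
import numpy as np
from scipy.optimize import brentq

# Maximize the Boltzmann-Gibbs entropy over p on {1..6} subject to a mean
# constraint; the solution is exponential: p_i proportional to exp(-lam * x_i).
x = np.arange(1, 7)
target_mean = 4.5

def mean_of(lam):
    w = np.exp(-lam * x)
    return np.sum(x * w) / np.sum(w)

# Solve the one-dimensional dual problem for the Lagrange multiplier.
lam = brentq(lambda l: mean_of(l) - target_mean, -5.0, 5.0)
p = np.exp(-lam * x)
p /= p.sum()
print(p)   # tilted toward large faces, least-biased given the constraint
```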
Beyond the swab: ecosystem sampling to understand the persistence of an amphibian pathogen.
Mosher, Brittany A; Huyvaert, Kathryn P; Bailey, Larissa L
2018-06-02
Understanding the ecosystem-level persistence of pathogens is essential for predicting and measuring host-pathogen dynamics. However, this process is often masked, in part due to a reliance on host-based pathogen detection methods. The amphibian pathogens Batrachochytrium dendrobatidis (Bd) and B. salamandrivorans (Bsal) are pathogens of global conservation concern. Despite having free-living life stages, little is known about the distribution and persistence of these pathogens outside of their amphibian hosts. We combine historic amphibian monitoring data with contemporary host- and environment-based pathogen detection data to obtain estimates of Bd occurrence independent of amphibian host distributions. We also evaluate differences in filter- and swab-based detection probability and assess inferential differences arising from using different decision criteria used to classify samples as positive or negative. Water filtration-based detection probabilities were lower than those from swabs but were > 10%, and swab-based detection probabilities varied seasonally, declining in the early fall. The decision criterion used to classify samples as positive or negative was important; using a more liberal criterion yielded higher estimates of Bd occurrence than when a conservative criterion was used. Different covariates were important when using the liberal or conservative criterion in modeling Bd detection. We found evidence of long-term Bd persistence for several years after an amphibian host species of conservation concern, the boreal toad (Anaxyrus boreas boreas), was last detected. Our work provides evidence of long-term Bd persistence in the ecosystem, and underscores the importance of environmental samples for understanding and mitigating disease-related threats to amphibian biodiversity.
NASA Astrophysics Data System (ADS)
Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan
The probability of occurrence of the positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by Jet Propulsion Laboratory, and transformed from geographic coordinates to magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probability distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitude M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was more suitable than the other distribution methods in this region.
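A sketch of the distribution-fitting and K-S testing step in Python (the study used Easyfit and Matlab), with hypothetical inter-event times; in scipy the Frechet distribution appears as invweibull, and fixing floc=0 gives the two-parameter Weibull:

```python
import numpy as np
from scipy import stats

# Hypothetical inter-event times (years) between M >= 6.0 earthquakes.
w = np.array([3.2, 1.1, 7.5, 2.4, 0.8, 5.6, 4.1, 1.9,
              6.3, 2.7, 0.5, 3.8, 9.2, 1.4, 4.9])

fits = [
    ("Weibull (2-par)", stats.weibull_min, dict(floc=0)),  # location fixed at 0
    ("Frechet",         stats.invweibull,  dict(floc=0)),  # Frechet = inverse Weibull
    ("Weibull (3-par)", stats.weibull_min, dict()),        # location free
]

for name, dist, kw in fits:
    params = dist.fit(w, **kw)
    ks = stats.kstest(w, dist.name, args=params)
    print(f"{name:15s} K-S D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```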
Optimal nonlinear filtering using the finite-volume method
NASA Astrophysics Data System (ADS)
Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.
2018-01-01
Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, that can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
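A minimal sketch of the prediction half of such a filter: a conservative upwind finite-volume update of the pendulum's phase-space density under a CFL-type time-step restriction (the measurement-update step of sequential inference is omitted); grid sizes and the initial density are illustrative:

```python
import numpy as np

# Upwind finite-volume solver for the continuity (Frobenius-Perron) equation
# of an undamped pendulum (g/L = 1):
#   d(rho)/dt + d(omega*rho)/d(theta) + d(-sin(theta)*rho)/d(omega) = 0
n = 96
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)   # periodic coordinate
omega = np.linspace(-3.0, 3.0, n)
dth, dom = theta[1] - theta[0], omega[1] - omega[0]
TH, OM = np.meshgrid(theta, omega, indexing="ij")

v = omega[None, :]            # d(theta)/dt, constant along theta
a = -np.sin(theta)[:, None]   # d(omega)/dt, constant along omega

# Initial state estimate: a Gaussian density, normalised on the grid.
rho = np.exp(-((TH - 1.0) ** 2 + OM ** 2) / 0.05)
rho /= rho.sum() * dth * dom

# Courant-Friedrichs-Lewy-type restriction keeps the density positive.
dt = 0.4 * min(dth / np.abs(v).max(), dom / np.abs(a).max())

def step(rho):
    # Conservative upwind fluxes: periodic in theta, zero-flux in omega.
    F = np.where(v > 0, v * rho, v * np.roll(rho, -1, axis=0))  # face i+1/2
    div_th = (F - np.roll(F, 1, axis=0)) / dth
    G = np.zeros((n, n + 1))                                    # omega faces
    G[:, 1:n] = np.where(a > 0, a * rho[:, :-1], a * rho[:, 1:])
    div_om = (G[:, 1:] - G[:, :-1]) / dom
    return rho - dt * (div_th + div_om)

for _ in range(300):
    rho = step(rho)
print(rho.min() >= 0.0, rho.sum() * dth * dom)  # positivity, mass ~ 1
```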
Wang, Zhiping; Cao, Dewei; Yu, Benli
2016-05-01
We present a new scheme for three-dimensional (3D) atom localization in a three-level atomic system via measuring the absorption of a weak probe field. Owing to the space-dependent atom-field interaction, the position probability distribution of the atom can be directly determined by measuring the probe absorption. It is found that, by properly varying the parameters of the system, the probability of finding the atom in 3D space can be almost 100%. Our scheme opens a promising way to achieve high-precision and high-efficiency 3D atom localization, which provides some potential applications in laser cooling or atom nano-lithography via atom localization.
Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles
NASA Astrophysics Data System (ADS)
Anastopoulos, C.; Hu, B. L.
2018-02-01
We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.
Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S
2017-01-01
OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01) than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.
Manfredi; Feix
2000-10-01
The properties of an alternative definition of quantum entropy, based on Wigner functions, are discussed. Such a definition emerges naturally from the Wigner representation of quantum mechanics, and can easily quantify the amount of entanglement of a quantum state. It is shown that smoothing of the Wigner function induces an increase in entropy. This fact is used to derive some simple rules to construct positive-definite probability distributions which are also admissible Wigner functions.
Effective classification of the prevalence of Schistosoma mansoni.
Mitchell, Shira A; Pagano, Marcello
2012-12-01
To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides), which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method goes directly to classification on the basis of positive pools, avoiding having to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
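A sketch of the pooled decision-rule idea under strong simplifying assumptions (perfect specificity, a fixed pool-level sensitivity, and illustrative prevalence cut-offs of 10% and 50%); the paper's actual cut-offs derive from the De Vlas worm/egg model with prevalence-dependent sensitivity:

```python
from scipy.stats import binom

def pool_positive_prob(p, k, sensitivity=1.0):
    """Probability that a pool of k slides tests positive when individual
    prevalence is p (perfect specificity assumed)."""
    return sensitivity * (1.0 - (1.0 - p) ** k)

def classify(n_pools, k, positives, p_low=0.10, p_high=0.50):
    """Two-threshold rule: compare the positive-pool count with cut-offs
    drawn from the binomial distribution at the class-boundary prevalences."""
    lower_cut = binom.ppf(0.95, n_pools, pool_positive_prob(p_low, k))
    upper_cut = binom.ppf(0.95, n_pools, pool_positive_prob(p_high, k))
    if positives <= lower_cut:
        return "low"
    return "high" if positives > upper_cut else "moderate"

# e.g. 25 pools of 5 slides each, 12 positive pools
print(classify(n_pools=25, k=5, positives=12))
```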
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
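A toy illustration of coverage-probability (CP) evaluation in one dimension: rigid setup shifts sampled from a Gaussian, with coverage judged by a simple V95-style criterion. The geometry, dose profile, margin, and criterion are invented for illustration, not a clinical planning system.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(-60.0, 60.0, 241)                 # mm, 1-D geometry
ctv = np.abs(x) <= 25.0                           # hypothetical CTV half-width of 25 mm
# Static dose: 60 Gy plateau covering the CTV plus a 5-mm margin, 5-mm linear penumbra
dose = np.clip((35.0 - np.abs(x)) / 5.0, 0.0, 1.0) * 60.0

def coverage_probability(sigma, n=5000, v=0.95, d=0.95):
    """Fraction of sampled rigid shifts for which at least a fraction v of
    the CTV receives at least d * 60 Gy (a simple V95-type criterion)."""
    covered = 0
    for s in rng.normal(0.0, sigma, n):
        shifted = np.interp(x + s, x, dose)       # dose seen by the shifted anatomy
        covered += (shifted[ctv] >= d * 60.0).mean() >= v
    return covered / n

for sigma in (1.0, 3.0, 5.0):                     # setup SD in mm
    print(sigma, coverage_probability(sigma))
```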
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
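A brief sketch of the comparison on synthetic roughness data: the symmetric (Gaussian) treatment places the most probable RMS at the mean, while an asymmetric fit (a lognormal here, as an assumed stand-in for the paper's distribution) yields a lower mode.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rms = rng.lognormal(mean=np.log(2.0), sigma=0.35, size=400)   # synthetic RMS roughness, nm

mu, sd = stats.norm.fit(rms)                     # symmetric treatment: mode = mean
shape, loc, scale = stats.lognorm.fit(rms, floc=0)
mode_lognorm = scale * np.exp(-shape**2)         # lognormal mode: exp(mu - sigma^2)

print(f"Gaussian mode ~ {mu:.2f} nm, asymmetric (lognormal) mode ~ {mode_lognorm:.2f} nm")
```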
Rostad, Colleen E.; Leenheer, Jerry A.
2004-01-01
Effects of methylation, molar response, multiple charging, solvents, and positive and negative ionization on molecular weight distributions of aquatic fulvic acid were investigated by electrospray ionization/mass spectrometry. After preliminary analysis by positive and negative modes, samples and mixtures of standards were derivatized by methylation to minimize ionization sites and reanalyzed. Positive ionization was less effective and produced more complex spectra than negative ionization. Ionization in methanol/water produced greater response than in acetonitrile/water. Molar response varied widely for the selected free acid standards when analyzed individually and in a mixture, but after methylation this range decreased. After methylation, the number average molecular weight of the Suwannee River fulvic acid remained the same while the weight average molecular weight decreased. These differences are probably indicative of disaggregation of large aggregated ions during methylation. Since the weight average molecular weight decreased, it is likely that aggregate formation in the fulvic acid was present prior to derivatization, rather than multiple charging in the mass spectra.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
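A compact sketch of the conditional-clade idea: estimate a tree's posterior probability as the product, over its clades, of conditional split frequencies tallied from a posterior sample. The tree encoding and the toy sample are invented for illustration and are not the article's software.

```python
from collections import Counter

def leaves(t):
    return frozenset([t]) if isinstance(t, str) else leaves(t[0]) | leaves(t[1])

def splits(t):
    # Yield (parent clade, split) pairs for a rooted binary tree written as
    # nested tuples, e.g. (('A', 'B'), ('C', ('D', 'E'))).
    if isinstance(t, str):
        return
    parent, left = leaves(t), leaves(t[0])
    yield parent, frozenset([left, parent - left])
    yield from splits(t[0])
    yield from splits(t[1])

# Stand-in posterior sample (a real one would come from an MCMC run)
sample = [(('A', 'B'), ('C', ('D', 'E')))] * 8 + [((('A', 'B'), 'C'), ('D', 'E'))] * 2

clade, pair = Counter(), Counter()
for t in sample:
    for parent, split in splits(t):
        clade[parent] += 1
        pair[parent, split] += 1

def ccd_probability(t):
    # Product over clades of conditional split frequencies
    p = 1.0
    for parent, split in splits(t):
        p *= pair[parent, split] / clade[parent]
    return p

print(ccd_probability(sample[0]), ccd_probability(sample[-1]))
```

For trees whose clades all appear in the sample, this also assigns probability to unsampled topologies, which simple relative frequencies cannot do.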
Prevalence of Vibrio parahaemolyticus in seafood products from hypermarkets in Shanghai.
Zhang, Zhaohuan; Lou, Yang; Du, Suping; Xiao, LiLi; Niu, Ben; Pan, Yingjie; Zhao, Yong
2017-01-01
Vibrio parahaemolyticus is an important gastroenteritis pathogen contaminating seafood in China. In this study a total of 992 seafood samples from major hypermarkets in Shanghai were monitored for prevalence and burden of V. parahaemolyticus from January 2011 to December 2012. Additionally, appropriate probability distributions for describing V. parahaemolyticus concentrations were assessed based on these surveillance data. Seventeen of 992 samples were positive for V. parahaemolyticus and the geometric mean was 0.1581 most probable number (MPN) g⁻¹. The variation in prevalence of V. parahaemolyticus was seasonal, and the burden of contamination in August (0.1942 MPN g⁻¹) differed significantly (P < 0.01) between 2011 and 2012. Also, the prevalence of V. parahaemolyticus was higher in shellfish and cephalopods than in other seafood (P < 0.05). By comparison, the lognormal distribution and integrated distribution showed no obvious difference for characterizing V. parahaemolyticus contamination. The low prevalence and burden found indicated that seafood from hypermarkets may not be an important risk source for V. parahaemolyticus infection in Shanghai, and more attention should be paid to other areas for selling seafood, such as farmlands or farmers' markets. The simple and effective lognormal distribution is recommended as a better choice for describing V. parahaemolyticus contamination in future risk assessment studies. © 2016 Society of Chemical Industry.
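A small sketch of fitting a lognormal to MPN concentration data and reporting the geometric mean; the numbers are invented, not the survey data.

```python
import numpy as np
from scipy import stats

# Invented MPN g^-1 concentrations from positive samples
mpn = np.array([0.03, 0.09, 0.15, 0.23, 0.36, 0.43, 0.74, 1.1, 2.4, 9.3])

print("geometric mean:", stats.gmean(mpn), "MPN/g")

shape, loc, scale = stats.lognorm.fit(mpn, floc=0)   # lognormal fit with lower bound 0
print("lognormal mu =", np.log(scale), "sigma =", shape)

# Check the lognormal hypothesis: log-concentrations should look normal
print(stats.kstest(np.log(mpn), "norm", args=(np.log(scale), shape)))
```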
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
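A short sketch of the statistics used above, on a synthetic irradiance series normalized to its mean: skewness and excess kurtosis, the flash fraction Ed > 1.5⟨Ed⟩, and a lognormal fit restricted to values below the 90th percentile. The series itself is simulated, not measured.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
Ed = rng.lognormal(0.0, 0.6, 5000)          # synthetic irradiance series
Ed /= Ed.mean()                             # normalize so <Ed> = 1

print("skewness:", stats.skew(Ed), "excess kurtosis:", stats.kurtosis(Ed))
print("flash fraction (Ed > 1.5<Ed>):", (Ed > 1.5).mean())

# Fit the lognormal only below the 90th percentile, where it was found adequate
body = Ed[Ed <= np.quantile(Ed, 0.90)]
shape, loc, scale = stats.lognorm.fit(body, floc=0)
print("fitted sigma:", shape, "median:", scale)
```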
Generalized Arcsine Laws for Fractional Brownian Motion
NASA Astrophysics Data System (ADS)
Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg
2018-01-01
The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian motion Bt starting from the origin and evolving during time T, one considers the following three observables: (i) the duration t+ for which the process is positive, (ii) the time tlast at which the process last visits the origin, and (iii) the time tmax at which it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution, expressed as an arcsine function, hence the name arcsine laws. We show how these laws change for fractional Brownian motion Xt, a non-Markovian Gaussian process indexed by the Hurst exponent H, which generalizes standard Brownian motion (i.e., H = 1/2). We obtain the three probabilities using a perturbative expansion in ε = H − 1/2. While all three probabilities are different, this distinction can only be made at second order in ε. Our results are confirmed to high precision by extensive numerical simulations.
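For the standard Brownian case H = 1/2, the common arcsine CDF (2/π) arcsin √x can be checked directly by simulation; a minimal sketch (discretization parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 5000, 1000
dW = rng.normal(0.0, 1.0 / np.sqrt(n_steps), (n_paths, n_steps))
B = np.cumsum(dW, axis=1)                   # standard BM on [0, 1], i.e. H = 1/2

t_plus = (B > 0).mean(axis=1)               # fraction of time spent positive
t_max = B.argmax(axis=1) / n_steps          # time of the maximum

for x in (0.1, 0.25, 0.5):
    arcsine_cdf = (2.0 / np.pi) * np.arcsin(np.sqrt(x))
    print(x, arcsine_cdf, (t_plus <= x).mean(), (t_max <= x).mean())
```

For H away from 1/2 the paths would instead be generated as fractional Brownian motion, and the three empirical distributions separate only at second order in H − 1/2, as the abstract states.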
Breakdown of the classical description of a local system.
Kot, Eran; Grønbech-Jensen, Niels; Nielsen, Bo M; Neergaard-Nielsen, Jonas S; Polzik, Eugene S; Sørensen, Anders S
2012-06-08
We provide a straightforward demonstration of a fundamental difference between classical and quantum mechanics for a single local system: namely, the absence of a joint probability distribution of the position x and momentum p. Elaborating on a recently reported criterion by Bednorz and Belzig [Phys. Rev. A 83, 052113 (2011)] we derive a simple criterion that must be fulfilled for any joint probability distribution in classical physics. We demonstrate the violation of this criterion using the homodyne measurement of a single photon state, thus proving a straightforward signature of the breakdown of a classical description of the underlying state. Most importantly, the criterion used does not rely on quantum mechanics and can thus be used to demonstrate nonclassicality of systems not immediately apparent to exhibit quantum behavior. The criterion is directly applicable to any system described by the continuous canonical variables x and p, such as a mechanical or an electrical oscillator and a collective spin of a large ensemble.
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
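A minimal sketch of the single-integral form, P(slip) = P(μ_avail < μ_req) = ∫ F_avail(u) f_req(u) du, evaluated with the trapezoidal method. The two friction distributions below are illustrative choices, not fitted gait data; the point is that neither needs to be normal.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

available = stats.lognorm(s=0.25, scale=0.45)    # available friction coefficient
required = stats.norm(loc=0.22, scale=0.05)      # required friction coefficient

# Single-integral form: P(slip) = integral of F_avail(u) * f_req(u) du
u = np.linspace(0.0, 1.2, 2001)
p_slip = trapezoid(available.cdf(u) * required.pdf(u), u)
print(f"probability of slip per step ~ {p_slip:.2e}")
```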
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
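A software analogue of the mapping stage (hedged: the actual circuit uses a hardware pipeline of flip-flops, comparators, and memories; the target pmf below is invented): uniform 12-bit integers are pushed through a precomputed inverse-CDF lookup table to yield 8-bit samples from a specified nonuniform distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Any specified pmf over 8-bit output values (here a discretized exponential)
values = np.arange(256)
pmf = np.exp(-values / 40.0)
pmf /= pmf.sum()

# Inverse-CDF lookup table indexed by a 12-bit uniform integer
cdf = np.cumsum(pmf)
table = np.searchsorted(cdf, (np.arange(4096) + 0.5) / 4096.0).astype(np.uint8)

uniform12 = rng.integers(0, 4096, size=100_000)  # stand-in for the 12-bit generators
samples = table[uniform12]                       # 8-bit nonuniform pseudorandom numbers
print(samples[:10], samples.mean())
```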
Probability density functions for use when calculating standardised drought indices
NASA Astrophysics Data System (ADS)
Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie
2015-04-01
Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The Tweedie is a three-parameter distribution which includes the Gamma distribution as a special case. It is bounded below at zero and has enough flexibility to fit most behaviours observed in the data. It does not always outperform the three-parameter distributions, but the rejection rates are similar. In addition, for certain parameter values the Tweedie distribution has a positive mass at zero, which means that ephemeral streams and months with zero rainfall can be modelled. It holds potential for wider application in drought studies in other climates and types of catchment.
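A minimal sketch of the standardisation step for the gamma case (SciPy has no Tweedie fitter, so the gamma distribution stands in; the monthly totals are synthetic): fit the accumulated series, map observations through the fitted CDF, then through the standard-normal quantile function, and test the result as with Shapiro-Wilk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rain = rng.gamma(shape=2.0, scale=30.0, size=480)   # synthetic monthly totals, mm

a, loc, scale = stats.gamma.fit(rain, floc=0)       # keep the lower bound at zero
quantiles = stats.gamma.cdf(rain, a, loc=loc, scale=scale)
spi = stats.norm.ppf(quantiles)                     # standardised index, ~N(0, 1)

print(stats.shapiro(spi))                           # normality check on the index
```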
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
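The rectangular and conditional probabilities described can be reproduced with modern tools; a sketch using SciPy (not the original program), with an assumed correlation of 0.6:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rho = 0.6                                        # assumed correlation
mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])

def rect_prob(a1, b1, a2, b2):
    # P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the joint CDF
    return (mvn.cdf([b1, b2]) - mvn.cdf([a1, b2])
            - mvn.cdf([b1, a2]) + mvn.cdf([a1, a2]))

print("rectangle:", rect_prob(-1.0, 1.0, -1.0, 1.0))

# Conditional: X | Y = y is normal with mean rho*y and variance 1 - rho^2
y = 0.5
print("conditional:", norm.cdf(1.0, loc=rho * y, scale=np.sqrt(1.0 - rho**2)))
```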
Changes in the distribution of radiocesium in the wood of Japanese cedar trees from 2011 to 2013.
Ogawa, Hideki; Hirano, Yurika; Igei, Shigemitsu; Yokota, Kahori; Arai, Shio; Ito, Hirohisa; Kumata, Atsushi; Yoshida, Hirohisa
2016-09-01
The changes in the distribution of (137)Cs in the wood of Japanese cedar (Cryptomeria japonica) trunks within three years after the Fukushima Dai-ichi Nuclear Power Plant (FDNP) accident in 2011 were investigated. Thirteen trees were felled to collect samples at 6 forests in 2 regions of the Fukushima prefecture. The radial distribution of (137)Cs in the wood was measured at different heights. Profiles of (137)Cs distribution in the wood changed considerably from 2011 to 2013, and the process of (137)Cs distribution change in the wood was clarified. From 2011 to 2012, the active transportation from sapwood to heartwood and the radial diffusion in heartwood proceeded quickly, and the radial (137)Cs distribution differed according to the vertical position of the trees. From 2012 to 2013, the vertical diffusion of (137)Cs from the treetop to the ground, probably caused by the gradient of (137)Cs concentration in the trunk, was observed. Eventually, the radial (137)Cs distributions were nearly identical at all vertical positions in 2013. Our results suggested that the active transportation from sapwood to heartwood and the vertical and radial diffusion in heartwood proceeded according to the vertical position of the tree and (137)Cs distribution in the wood approached the equilibrium state within three years after the accident. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Chase, Robert
1992-01-01
Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution combining a lognormal distribution describing the positive rainfall values and a spike at zero describing the observations indicating no rainfall is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution by using a maximum-likelihood method. Mean values are computed for the mixed distribution and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
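A sketch of the mixed ("spike at zero plus lognormal") distribution's mean, with maximum-likelihood estimates for the lognormal part taken from the positive values; the bin data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 1-degree bin: zeros (no rain) plus lognormal positive rain rates
rain = np.concatenate([np.zeros(700), rng.lognormal(0.5, 1.0, 300)])

positive = rain[rain > 0]
p = positive.size / rain.size              # probability of nonzero rain
mu = np.log(positive).mean()               # maximum-likelihood lognormal parameters
sigma2 = np.log(positive).var()

mixed_mean = p * np.exp(mu + sigma2 / 2.0) # mean of the mixed distribution
print(mixed_mean, rain.mean())             # the two should agree closely
```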
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
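Under the Poisson-process assumption mentioned above, per-source rates aggregate by simple addition; a toy sketch with a plain Gutenberg-Richter size distribution (all parameter values invented, and the truncation or tapering of the largest sizes omitted):

```python
import numpy as np

def gr_rate(m, a=4.0, b=1.0):
    # Annual rate of events with magnitude >= m from a plain Gutenberg-Richter
    # relation; truncated or tapered variants would thin the largest sizes.
    return 10.0 ** (a - b * m)

# Aggregate independent Poissonian sources by summing their rates
rates = {"boundary A, M >= 8.0": gr_rate(8.0), "boundary B, M >= 8.3": gr_rate(8.3)}
lam = sum(rates.values())                 # events per year

T = 50.0
print("P(at least one event in 50 yr):", 1.0 - np.exp(-lam * T))
```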
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Kurt Vedros; Robert Youngblood
This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and “false” would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values, and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history's ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
Structural Information from Single-molecule FRET Experiments Using the Fast Nano-positioning System.
Dörfler, Thilo; Eilert, Tobias; Röcker, Carlheinz; Nagy, Julia; Michaelis, Jens
2017-02-09
Single-molecule Förster Resonance Energy Transfer (smFRET) can be used to obtain structural information on biomolecular complexes in real-time. Thereby, multiple smFRET measurements are used to localize an unknown dye position inside a protein complex by means of trilateration. In order to obtain quantitative information, the Nano-Positioning System (NPS) uses probabilistic data analysis to combine structural information from X-ray crystallography with single-molecule fluorescence data to calculate not only the most probable position but the complete three-dimensional probability distribution, termed posterior, which indicates the experimental uncertainty. The concept was generalized for the analysis of smFRET networks containing numerous dye molecules. The latest version of NPS, Fast-NPS, features a new algorithm using Bayesian parameter estimation based on Markov Chain Monte Carlo sampling and parallel tempering that allows for the analysis of large smFRET networks in a comparably short time. Moreover, Fast-NPS allows the calculation of the posterior by choosing one of five different models for each dye, that account for the different spatial and orientational behavior exhibited by the dye molecules due to their local environment. Here we present a detailed protocol for obtaining smFRET data and applying the Fast-NPS. We provide detailed instructions for the acquisition of the three input parameters of Fast-NPS: the smFRET values, as well as the quantum yield and anisotropy of the dye molecules. Recently, the NPS has been used to elucidate the architecture of an archaeal open promotor complex. This data is used to demonstrate the influence of the five different dye models on the posterior distribution.
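A schematic of NPS-style trilateration on a coarse grid (a flat-prior grid posterior rather than the Fast-NPS sampler, with invented geometry, an assumed Förster radius R0, and Gaussian measurement noise in the efficiencies):

```python
import numpy as np

rng = np.random.default_rng(9)

satellites = np.array([[0.0, 0.0, 0.0], [60.0, 0.0, 0.0], [0.0, 60.0, 0.0]])  # known dye sites, A
R0, sigma_E = 54.0, 0.03                       # assumed Foerster radius and noise
true_pos = np.array([25.0, 20.0, 10.0])        # unknown dye, used only to simulate data

def efficiency(r):
    return 1.0 / (1.0 + (r / R0) ** 6)

E_meas = efficiency(np.linalg.norm(satellites - true_pos, axis=1))
E_meas = E_meas + rng.normal(0.0, sigma_E, 3)  # noisy smFRET efficiencies

# Flat-prior posterior over a coarse 3-D grid
g = np.linspace(-20.0, 80.0, 51)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)

log_post = np.zeros(X.shape)
for s, E in zip(satellites, E_meas):
    r = np.linalg.norm(grid - s, axis=-1)
    log_post += -0.5 * ((efficiency(r) - E) / sigma_E) ** 2

idx = np.unravel_index(np.argmax(log_post), X.shape)
print("most probable position:", grid[idx], "true:", true_pos)
```

The full posterior (not just its maximum) carries the experimental uncertainty, which is the quantity NPS reports; the dye-model choices described in the protocol would enter through the likelihood.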
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
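The emergence of the reciprocal (1/x) invariant distribution can be seen numerically through Benford's law for leading digits: repeated multiplication by iid scale factors drives an arbitrary starting distribution toward leading-digit frequencies log10(1 + 1/d). A short sketch (starting distribution and scale factors chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(10)

x = rng.uniform(1.0, 10.0, 100_000)          # arbitrary starting distribution
for _ in range(50):                          # repeated random changes of scale
    x *= rng.uniform(0.5, 2.0, x.size)

lead = (x / 10.0 ** np.floor(np.log10(x))).astype(int)    # leading digit 1..9
freq = np.bincount(lead, minlength=10)[1:] / x.size

print("empirical:", np.round(freq, 3))
print("Benford:  ", np.round(np.log10(1.0 + 1.0 / np.arange(1, 10)), 3))
```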
A mass reconstruction technique for a heavy resonance decaying to τ + τ -
NASA Astrophysics Data System (ADS)
Xia, Li-Gang
2016-11-01
For a resonance decaying to τ+τ−, it is difficult to reconstruct its mass accurately because of the presence of neutrinos in the decay products of the τ leptons. If the resonance is heavy enough, we show that its mass can be well determined by the momentum component of the τ decay products perpendicular to the velocity of the τ lepton, p⊥, and the mass of the visible/invisible decay products, m_vis/inv, for τ decaying to hadrons/leptons. By sampling all kinematically allowed values of p⊥ and m_vis/inv according to their joint probability distributions determined by the MC simulations, the mass of the mother resonance is assumed to lie at the position with the maximal probability. Since p⊥ and m_vis/inv are invariant under the boost in the τ lepton direction, the joint probability distributions are independent of the τ's origin. Thus this technique is able to determine the mass of an unknown resonance with no efficiency loss. It is tested using MC simulations of the physics processes pp → Z/h(125)/h(750) + X → ττ + X at 13 TeV. The ratio of the full width at half maximum and the peak value of the reconstructed mass distribution is found to be 20%-40% using the information of missing transverse energy. Supported by General Financial Grant from the China Postdoctoral Science Foundation (2015M581062)
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X sub E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X sub E is a probability distribution p sub x on X which describes the observer's opinion about where X sub E is likely to be in X; a hard bound on X sub E is an inequality Q sub x(X sub E, X sub E) is equal to or less than 1, where Q sub x is a positive definite quadratic form on X. A hard bound Q sub x can be softened to many different probability distributions p sub x, but all these p sub x's carry much new information about X sub E which is absent from Q sub x, and some information which contradicts Q sub x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p sub x. If that probability distribution was obtained by softening a hard prior bound Q sub x, rather than by objective statistical inference independent of y, then p sub x contains so much unsupported new information absent from Q sub x that conclusions about z obtained with SI or BI would seem to be suspect.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Maccone, C.
In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance in between any two nearby habitable planets should be about 88 light years ± 40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
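The lognormal limit claimed for the product of factors is easy to check numerically; a sketch with ten uniformly distributed factors around placeholder means (not Dole's values) and a 10% relative spread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Ten positive factors, each uniform around its own mean with a 10% spread
means = np.array([2.0, 0.5, 0.9, 3.0, 0.4, 0.8, 1.5, 0.3, 5.0, 0.6])

prod = np.ones(2000)
for m in means:
    prod *= rng.uniform(0.9 * m, 1.1 * m, prod.size)

logs = np.log(prod)
z = (logs - logs.mean()) / logs.std()
print(stats.kstest(z, "norm"))    # log of the product is close to normal (CLT)
```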
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
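For small numbers of genes and replicates the exact null distribution is a k-fold convolution of uniform ranks; a brute-force sketch (illustrative parameters, not the paper's number-theoretic derivation):

```python
from collections import Counter
from fractions import Fraction

def rank_product_pmf(n, k):
    # Exact null pmf of the product of k independent ranks, each uniform on
    # 1..n, built by k-fold convolution (fine for small n and k).
    pmf = Counter({1: Fraction(1)})
    for _ in range(k):
        new = Counter()
        for value, p in pmf.items():
            for r in range(1, n + 1):
                new[value * r] += p / n
        pmf = new
    return pmf

pmf = rank_product_pmf(n=8, k=3)
# Exact small tail probability P(RP <= 4), the regime where permutation and
# gamma approximations are least reliable
print(float(sum(p for v, p in pmf.items() if v <= 4)))
```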
Fusion of Imaging and Inertial Sensors for Navigation
2006-09-01
... combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat ... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t₀) = x₀ (2.4). For a given input function u₀(t) ... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random ...
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm ɛ.
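A minimal simulation of the reset process with 2 × 2 iid Gaussian matrices, whose top Lyapunov exponent is positive; the reset multiple of 0.5 is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(12)

samples, M = [], 0.5 * np.eye(2)
while len(samples) < 50_000:
    M = rng.normal(0.0, 1.0, (2, 2)) @ M   # iid Gaussian matrices (positive Lyapunov exponent)
    eps = np.linalg.norm(M, 2)             # spectral norm of the running product
    if eps < 1.0:
        samples.append(eps)
    else:
        M = 0.5 * np.eye(2)                # reset to a multiple of the identity

hist, _ = np.histogram(samples, bins=20, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))                   # empirical density of the norm
```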
Analysis of Nondeterministic Search Patterns for Minimization of UAV Counter-Targeting
2013-03-01
... Defense System; NPS: Naval Postgraduate School; PDF: Probability Distribution Function; SLS: Sea Level Standard; UAV: Unmanned Aerial Vehicle; UAS: Unmanned Aerial ... intelligence regarding a target's position is obtained, or when contact on a known target is lost. The shape of the AOU is often circular or elliptical ... exceed this artificial boundary, ensuring that the searcher will never violate the actual area boundary. This constraint still enables the searcher to ...
Distribution of G concurrence of random pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol
2006-12-15
The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N, K → ∞, with K/N fixed.
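A quick numerical check of the concentration of the G concurrence, using the standard construction of induced random pure states (coefficient matrix with iid complex Gaussian entries); sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(13)

def g_concurrence(N):
    # Random pure state of an N x N system: coefficient matrix with iid
    # complex Gaussian entries; reduced state rho = A A^dagger / tr(A A^dagger)
    A = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2.0)
    rho = A @ A.conj().T
    rho /= np.trace(rho).real
    _, logdet = np.linalg.slogdet(rho)
    return N * np.exp(logdet / N)          # G = N * det(rho)^(1/N)

for N in (2, 8, 32, 128):
    vals = [g_concurrence(N) for _ in range(200)]
    print(N, np.mean(vals), np.std(vals))  # concentrates near 1/e ~ 0.368
```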
Population dynamical behavior of Lotka-Volterra system under regime switching
NASA Astrophysics Data System (ADS)
Li, Xiaoyue; Jiang, Daqing; Mao, Xuerong
2009-10-01
In this paper, we investigate a stochastic Lotka-Volterra system under regime switching, driven by a standard Brownian motion B(t). The aim here is to find out what happens under regime switching. We first obtain the sufficient conditions for the existence of global positive solutions, stochastic permanence and extinction. We find that both stochastic permanence and extinction have close relationships with the stationary probability distribution of the Markov chain. The limit of the average in time of the sample path of the solution is then estimated by two constants related to the stationary distribution and the coefficients. Finally, the main results are illustrated by several examples.
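An Euler-Maruyama sketch of a one-dimensional regime-switching system of this type (a logistic special case with invented parameters; the chain's stationary distribution here weights the favourable regime enough for permanence):

```python
import numpy as np

rng = np.random.default_rng(14)

# Two regimes with different growth, competition and noise levels (invented)
b = {0: 1.0, 1: -0.5}
a = {0: 0.5, 1: 0.5}
sigma = {0: 0.2, 1: 0.4}
q = {0: 0.5, 1: 1.0}          # switching rates out of each regime

dt, n = 1e-3, 100_000
x, r = 0.5, 0
path = np.empty(n)
for i in range(n):
    if rng.random() < q[r] * dt:                 # Markov-chain regime switch
        r = 1 - r
    dB = rng.normal(0.0, np.sqrt(dt))
    x += x * (b[r] - a[r] * x) * dt + sigma[r] * x * dB   # Euler-Maruyama step
    x = max(x, 1e-12)
    path[i] = x

# Stationary distribution of the chain is pi = (2/3, 1/3); the averaged growth
# rate pi_0*b_0 + pi_1*b_1 = 0.5 > 0 points to permanence rather than extinction
print("time average of the sample path:", path.mean())
```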
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that the expected effect of ASMC in increasing the maximum discharge diminishes as the probability increases. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a given set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether rational runoff coefficient tables for the rational method can be arranged in advance, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil were carried out.
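A minimal sketch of the Green-Ampt/Hortonian building block used in the coupling (ponding from t = 0 assumed, parameter values invented): cumulative infiltration solves F − ψΔθ ln(1 + F/(ψΔθ)) = Kt, and runoff is the rainfall intensity in excess of the infiltration capacity.

```python
import numpy as np
from scipy.optimize import brentq

K, psi, dtheta = 10.0, 110.0, 0.3      # mm/h, mm, (-): invented loamy-sand-like values

def cumulative_infiltration(t):
    # Green-Ampt under ponded conditions: F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t
    lhs = lambda F: F - psi * dtheta * np.log(1.0 + F / (psi * dtheta)) - K * t
    return brentq(lhs, 1e-9, 1e4)

def runoff_rate(i, t):
    # Hortonian runoff: rainfall intensity in excess of infiltration capacity
    F = cumulative_infiltration(t)
    f = K * (1.0 + psi * dtheta / F)   # infiltration capacity at time t
    return max(i - f, 0.0)

for t in (0.1, 0.5, 1.0, 2.0):         # hours, with a 40 mm/h storm
    print(t, round(runoff_rate(40.0, t), 2))
```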
Radar sea reflection for low-e targets
NASA Astrophysics Data System (ADS)
Chow, Winston C.; Groves, Gordon W.
1998-09-01
Our modeling of radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The 'specular deviation angle' as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions, and together with the beam-broadening effect of the small surface ripples determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived. The distribution of the specular deviation angle as a function of position on the mean sea surface is described.
SETI and SEH (Statistical Equation for Habitables)
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-01-01
The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (both of which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle: it should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any couple of nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of every 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an average distance between ET civilizations about 20 times greater than the average distance between adjacent habitable planets.
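The lognormality claim is easy to probe numerically: the log of a product of independent positive factors is a sum of independent terms, so the CLT applies. A minimal sketch with ten uniform factors (placeholder means and a 10% standard deviation; these are not Dole's values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Ten independent positive factors, uniform around illustrative means with ~10% sd.
# The means are placeholders, not Dole's actual 1964 values.
means = np.array([0.5, 0.9, 0.3, 0.7, 0.2, 0.8, 0.6, 0.4, 0.5, 0.1])
half = means * 0.10 * np.sqrt(3.0)      # uniform on [m-h, m+h] has sd = h/sqrt(3)
samples = rng.uniform(means - half, means + half, size=(200_000, 10))
product = samples.prod(axis=1)

# The CLT argument: log(product) is a sum of ten independent terms,
# so the product should be approximately lognormal.
logp = np.log(product)
print("skewness of log-product (near 0 if ~lognormal):", stats.skew(logp))
print("mean of product:", product.mean(), " product of means:", means.prod())
```

For independent factors the mean of the product equals the product of the means, which is the "mean value by construction" property quoted in the abstract.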
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
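A minimal sketch of the normal-density construction compared in this study, using simulated data: the stabilized weight is the marginal exposure density divided by the conditional density given confounders, both estimated with ordinary regression. Variable names and coefficients are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)

# Simulated data: confounder L, continuous exposure A, binary outcome Y.
n = 5_000
L = rng.normal(size=n)
A = 0.5 * L + rng.normal(size=n)                  # homoscedastic normal exposure
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * A + 0.4 * L))))

# Denominator: conditional density f(A | L) from a linear model.
den_fit = sm.OLS(A, sm.add_constant(L)).fit()
den = stats.norm.pdf(A, den_fit.fittedvalues, np.sqrt(den_fit.scale))

# Numerator: marginal density f(A) -> stabilized weights.
num = stats.norm.pdf(A, A.mean(), A.std())
sw = num / den

# Weighted logistic regression for the marginal odds ratio.
msm = sm.GLM(Y, sm.add_constant(A), family=sm.families.Binomial(),
             freq_weights=sw).fit()
print("marginal OR per unit of A:", np.exp(msm.params[1]))
```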
A numerical 4D Collision Risk Model
NASA Astrophysics Data System (ADS)
Schmitt, Pal; Culloch, Ross; Lieber, Lilian; Kregting, Louise
2017-04-01
With the growing number of marine renewable energy (MRE) devices being installed across the world, some concern has been raised about the possibility of harming mobile marine fauna by collision. Although physical contact between an MRE device and an organism has not been reported to date, these novel sub-sea structures pose a challenge for accurately estimating collision risks as part of environmental impact assessments. Even if the animal motion is simplified to linear translation, ignoring likely evasive behaviour, the mathematical problem of establishing an impact probability is not trivial. We present a numerical algorithm to obtain such probability distributions using transient, four-dimensional simulations of a novel marine renewable device concept, Deep Green, Minesto's power plant, hereafter referred to as the 'kite', which flies in a figure-of-eight configuration. Simulations were carried out altering several configurations, including kite depth, kite speed and kite trajectory, while keeping the speed of the moving object constant. Since the kite assembly is defined as two parts in the model, a tether (attached to the seabed) and the kite, the collision risk of each part is reported independently. By comparing the number of collisions with the number of collision-free simulations, a probability of impact is obtained for each simulated position in the cross-section of the area. Results suggest that close to the bottom, where the tether amplitude is small, the path is always blocked and the impact probability is 100%, as expected. Higher up in the water column, however, the collision probability is twice as high along the midline, where the tether passes twice per period, as at the extremes of its trajectory. The collision probability distribution is much more complex in the upper end of the water column, where the kite and tether can simultaneously collide with the object. Results demonstrate the viability of such models, which can also incorporate empirical field data for assessing the probability of collision of animals with an MRE device under varying operating conditions.
Snyder, Marcia; Freeman, Mary C.; Purucker, S. Thomas; Pringle, Catherine M.
2016-01-01
Freshwater shrimps are an important biotic component of tropical ecosystems. However, they can have a low probability of detection when abundances are low. We sampled 3 of the most common freshwater shrimp species, Macrobrachium olfersii, Macrobrachium carcinus, and Macrobrachium heterochirus, and used occupancy modeling and logistic regression models to improve our limited knowledge of distribution of these cryptic species by investigating both local- and landscape-scale effects at La Selva Biological Station in Costa Rica. Local-scale factors included substrate type and stream size, and landscape-scale factors included presence or absence of regional groundwater inputs. Capture rates for 2 of the sampled species (M. olfersii and M. carcinus) were sufficient to compare the fit of occupancy models. Occupancy models did not converge for M. heterochirus, but M. heterochirus had high enough occupancy rates that logistic regression could be used to model the relationship between occupancy rates and predictors. The best-supported models for M. olfersii and M. carcinus included conductivity, discharge, and substrate parameters. Stream size was positively correlated with occupancy rates of all 3 species. High stream conductivity, which reflects the quantity of regional groundwater input into the stream, was positively correlated with M. olfersii occupancy rates. Boulder substrates increased occupancy rate of M. carcinus and decreased the detection probability of M. olfersii. Our models suggest that shrimp distribution is driven by factors that function at local (substrate and discharge) and landscape (conductivity) scales.
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
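The core quantity CUMBIN computes, the k-out-of-n reliability, and its incomplete-beta equivalent can be sketched with a modern scientific library (scipy assumed; this is not CUMBIN's own code):

```python
from scipy import stats, special

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n independent components work), each with reliability p."""
    return stats.binom.sf(k - 1, n, p)      # survival function: P(X >= k)

n, k, p = 10, 8, 0.95
print("k-out-of-n reliability:", k_out_of_n_reliability(k, n, p))
# Same value via the regularized incomplete beta function I_p(k, n-k+1):
print("incomplete beta check: ", special.betainc(k, n - k + 1, p))
```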
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
A monogamy-of-entanglement game with applications to device-independent quantum cryptography
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie
2013-10-01
We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is built on a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing it to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and the marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
Negative values of quasidistributions and quantum wave and number statistics
NASA Astrophysics Data System (ADS)
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies, the load is taken as constant. But load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: (1) finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; (2) finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and (3) comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
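A minimal sketch of the Monte-Carlo loop described here; the deterministic solver is a placeholder stub (a real implementation would be a backward/forward-sweep radial load flow), and the bus means and standard deviations are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def deterministic_load_flow(p_load, q_load):
    """Placeholder for a backward/forward-sweep radial load-flow solver.
    Returns a fake voltage magnitude per bus, for illustration only."""
    return 1.0 - 0.02 * (p_load + 0.5 * q_load)

p_mean, p_sd = np.array([1.2, 0.8, 1.5]), np.array([0.12, 0.08, 0.15])  # MW
q_mean, q_sd = p_mean * 0.4, p_sd * 0.4                                 # MVAr

n_sims = 10_000
voltages = np.empty((n_sims, 3))
for i in range(n_sims):
    p = rng.normal(p_mean, p_sd)         # sampled active power per bus
    q = rng.normal(q_mean, q_sd)         # sampled reactive power per bus
    voltages[i] = deterministic_load_flow(p, q)

# Probabilistic solution reconstructed from the deterministic runs:
print("mean bus voltages:", voltages.mean(axis=0))
print("5th percentile:   ", np.percentile(voltages, 5, axis=0))
```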
NASA Astrophysics Data System (ADS)
Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.
2017-10-01
The probability distribution of the turbulent kinetic energy dissipation rate in the stratified ocean usually deviates from the classic lognormal distribution that has been formulated for, and often observed in, unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, the northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, the lognormal distribution of ε_r is preferable, although the Burr is an acceptable alternative. The skewness Sk_ε and the kurtosis K_ε of the dissipation rate appear to be well correlated over a wide range of Sk_ε and K_ε variability.
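The model comparison described here can be sketched with scipy's burr12 and lognorm distributions; synthetic samples stand in for the measured dissipation rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic stand-in for dissipation-rate samples (real data would come
# from micro-scale shear profiles).
eps = stats.lognorm.rvs(s=1.1, scale=1e-8, size=4000, random_state=rng)

# Fit both candidate models with the location pinned at zero.
burr_params = stats.burr12.fit(eps, floc=0)
logn_params = stats.lognorm.fit(eps, floc=0)

# Compare via log-likelihood (higher is better).
ll_burr = np.sum(stats.burr12.logpdf(eps, *burr_params))
ll_logn = np.sum(stats.lognorm.logpdf(eps, *logn_params))
print(f"Burr XII log-likelihood:  {ll_burr:.1f}")
print(f"lognormal log-likelihood: {ll_logn:.1f}")
```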
The minimum area requirements (MAR) for giant panda: an empirical study
Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang
2016-01-01
Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive.
Evaluation of the three parameter Weibull distribution function for predicting fracture probability in composite materials
1978-03-01
AFIT thesis (AFIT/GAE). An equation is derived for the risk of rupture of a unidirectionally laminated composite subjected to pure bending; this equation can be simplified further.
Lambregts, Merel M C; Warreman, Eva B; Bernards, Alexandra T; Veelken, Hendrik; von dem Borne, Peter A; Dekkers, Olaf M; Visser, Leo G; de Boer, Mark G
2018-02-01
Blood cultures (BCs) are essential in the evaluation of neutropenic fever. Modern BC systems have significantly reduced the time-to-positivity (TTP) of BC. This study explores the probability of bacteraemia when BCs have remained negative for different periods of time. All adult patients with neutropenia and bacteraemia were included (January 2012-February 2016). Predictive clinical factors for short (≤16 hours) and long (>24 hours) TTP were determined. The residual probability of bacteraemia was estimated for the scenario of negative BC 24 hours after collection. The cohort consisted of 154 patients, accounting for 190 episodes of bacteraemia. The median age was 61 years, and 60.5% were male. In 123 (64.7%) episodes, BC yielded a single Gram-positive micro-organism and in 49 (25.8%) a Gram-negative micro-organism (median TTP 16.7 and 14.5 hours, respectively; P < .01). TTP was ≤24 hours in 91.6% of episodes. Central line-associated bacteraemia was associated with long TTP. The probability of bacteraemia if BC had remained negative for 24 hours was 1%-3%. The expected TTP offers guidance in the management of patients with neutropenia and suspected bacteraemia. The knowledge of negative BC can support a change in working diagnosis, and impact clinical decisions as soon as 24 hours after BC collection. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials
NASA Astrophysics Data System (ADS)
Malik, Pradeep; Swaminathan, A.
2010-11-01
In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville type operator to them. Various properties, such as an explicit representation in terms of hypergeometric functions, differential equations, and recurrence relations, are derived.
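For reference, the Riemann-Liouville fractional integral of order α that underlies this kind of generalization is the standard one (textbook definition, not specific to this paper):

```latex
\[
  (I^{\alpha} f)(x) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{x} (x-t)^{\alpha-1}\, f(t)\, \mathrm{d}t ,
  \qquad \alpha > 0,\; x > 0 .
\]
```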
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
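The dependence-function representation referred to here is usually written in Pickands' form; a sketch of the standard formula (the paper's notation and analytical conditions may differ):

```latex
\[
  F(x,y) \;=\; \exp\!\left\{ \bigl(\log F_X(x) + \log F_Y(y)\bigr)\,
  A\!\left( \frac{\log F_Y(y)}{\log F_X(x) + \log F_Y(y)} \right) \right\},
\]
```

where $A$ is convex on $[0,1]$ and satisfies $\max(t, 1-t) \le A(t) \le 1$; $A \equiv 1$ gives independent margins.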
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
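The repeated-detection argument is iterated Bayes with a likelihood ratio equal to the true-positive to false-positive ratio; a minimal sketch (the prior and the detection rates are illustrative, not mission values):

```python
def posterior_after_detections(prior, tp_rate, fp_rate, n_detections):
    """Update P(indigenous organics) after n positive detections, assuming
    independent tests with the given true/false positive rates."""
    odds = prior / (1.0 - prior)
    lr = tp_rate / fp_rate                # likelihood ratio per positive test
    odds *= lr ** n_detections
    return odds / (1.0 + odds)

# Illustrative numbers: a weak instrument (ratio 2) vs a better one (ratio 10).
for tp, fp in [(0.6, 0.3), (0.6, 0.06)]:
    probs = [posterior_after_detections(0.1, tp, fp, n) for n in range(4)]
    print(f"TP={tp}, FP={fp}:", [round(p, 3) for p in probs])
```

The output shows both effects claimed in the abstract: repeated positives progressively raise the probability of indigeneity, and the higher-ratio instrument does so much faster.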
Method for detecting and avoiding flight hazards
NASA Astrophysics Data System (ADS)
von Viebahn, Harro; Schiefele, Jens
1997-06-01
Today's aircraft are equipped with several independent warning and hazard avoidance systems such as GPWS, TCAS or weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The method developed here for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft consisting of discrete volume elements. For each element of the volume, the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted by utilizing a probability distribution. This approach ensures that all potential positions of the aircraft within the near future are considered, while weighting the most likely flight path. A conflict detection algorithm initiates an alarm in case the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one which caused the alarm. The pilot receives visual information about the type, the location, and the severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain and obstacle hazard avoidance functions. Its general formulation allows easy integration of, e.g., weather information or airspace restrictions.
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate an annual probability of a large eruption of 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate an annual probability of a large eruption of 1.4×10⁻⁵.
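Under the exponential (Poisson-process) model, the probability of at least one eruption in the next year follows from the mean recurrence interval alone; a minimal sketch (the recurrence intervals below are illustrative, not the paper's data):

```python
import numpy as np

def annual_eruption_probability(mean_interval_years, dt=1.0):
    """P(at least one event in the next dt years) for a Poisson process."""
    rate = 1.0 / mean_interval_years
    return 1.0 - np.exp(-rate * dt)

# Illustrative mean recurrence intervals (years):
for name, mu in [("volcanic center", 7000.0), ("regional mafic vents", 1500.0)]:
    print(f"{name}: {annual_eruption_probability(mu):.2e} per year")
```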
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7].
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
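A minimal simulation sketch of the 1D RTP described here (speed, tumble rate and time step are illustrative), showing the crossover from non-Gaussian behavior at short times to diffusive Gaussian behavior at long times:

```python
import numpy as np

rng = np.random.default_rng(5)
V, GAMMA, DT = 1.0, 1.0, 1e-3   # speed, tumble rate, time step (illustrative)

def rtp_positions(n_particles, t_final, D=0.0):
    """Run-and-tumble particles in 1D: velocity ±V, direction reversed at
    rate GAMMA, with optional translational diffusion constant D."""
    x = np.zeros(n_particles)
    s = rng.choice([-1.0, 1.0], size=n_particles)     # direction of motion
    for _ in range(int(t_final / DT)):
        flip = rng.random(n_particles) < GAMMA * DT   # tumble events
        s[flip] *= -1.0
        x += V * s * DT + np.sqrt(2 * D * DT) * rng.normal(size=n_particles)
    return x

# Short times: non-Gaussian; long times: Gaussian with variance ~ V^2 t / GAMMA.
for t in (0.5, 20.0):
    x = rtp_positions(20_000, t)
    print(f"t={t}: var={x.var():.3f}  long-time prediction={V**2 * t / GAMMA:.3f}")
```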
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
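For Onemax the exact distribution is easy to write down: a string with fitness k keeps k - X + Y ones after mutation, where X ~ Bin(k, p) counts flipped ones and Y ~ Bin(n-k, p) counts flipped zeros. A minimal sketch of this convolution (a direct computation, not the Krawtchouk-polynomial machinery of the paper):

```python
import numpy as np
from scipy import stats

def onemax_mutation_pmf(n, k, p):
    """PMF of the Onemax fitness after uniform bit-flip mutation (prob p per bit)
    of a length-n string that currently has fitness k (k ones)."""
    pmf = np.zeros(n + 1)
    for x in range(k + 1):              # x ones flipped to zero
        for y in range(n - k + 1):      # y zeros flipped to one
            pmf[k - x + y] += stats.binom.pmf(x, k, p) * stats.binom.pmf(y, n - k, p)
    return pmf

pmf = onemax_mutation_pmf(n=20, k=15, p=0.05)
print("expected fitness after mutation:", np.dot(np.arange(21), pmf))
# Check against the closed form E = k(1-p) + (n-k)p:
print("closed form:", 15 * 0.95 + 5 * 0.05)
```

Each term is a polynomial in p, consistent with the paper's observation that the full fitness distribution is polynomial in the flip probability.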
A stochastic model for the probability of malaria extinction by mass drug administration.
Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A
2017-09-18
Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, R_c. A simple compartmental model is developed and used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA is calculated analytically for various scenarios in this model. The results indicate two requirements for MDA. Firstly, R_c must be sustained at R_c < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations of fewer than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
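The generating-function step can be illustrated with the textbook branching-process calculation: the extinction probability q of a single line of infection solves q = G(q), where G is the offspring PGF. A minimal sketch with Poisson offspring of mean R_c (the paper's compartmental model is more detailed):

```python
from math import exp

def extinction_probability(Rc, tol=1e-12):
    """Smallest fixed point of q = G(q) for Poisson(Rc) offspring,
    i.e. q = exp(Rc * (q - 1)), found by fixed-point iteration from 0."""
    q = 0.0
    while True:
        q_new = exp(Rc * (q - 1.0))
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

for Rc in (0.9, 1.1, 1.2, 1.5):
    print(f"Rc={Rc}: extinction probability of one infection line = "
          f"{extinction_probability(Rc):.4f}")
```

For R_c ≤ 1 the iteration converges to 1 (certain extinction); above 1 it gives the residual chance that stochastic fluctuations still eliminate the lineage.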
Dealing with non-unique and non-monotonic response in particle sizing instruments
NASA Astrophysics Data System (ADS)
Rosenberg, Phil
2017-04-01
A number of instruments used as de-facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed area depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov Chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution which includes contributions from the non-unique response curve and counting statistics, and can propagate calibration uncertainties.
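A minimal sketch of the Bayesian step, assuming a known kernel matrix K mapping true size bins to instrument bins and Poisson counting statistics; a hand-rolled Metropolis sampler stands in for whatever sampler the authors actually use, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed instrument kernel: K[i, j] = P(particle in true size bin j
# is recorded in instrument bin i). Values are illustrative.
K = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
true_n = np.array([200.0, 80.0, 30.0])
counts = rng.poisson(K @ true_n)                  # observed instrument counts

def log_post(n):
    """Poisson log-likelihood with a flat prior on non-negative concentrations."""
    if np.any(n < 0):
        return -np.inf
    lam = K @ n
    if np.any(lam <= 0):
        return -np.inf
    return np.sum(counts * np.log(lam) - lam)

# Metropolis sampling of the posterior over the true size distribution.
n_cur = np.full(3, 100.0)
lp_cur = log_post(n_cur)
samples = []
for step in range(50_000):
    n_prop = n_cur + rng.normal(0, 5.0, size=3)
    lp_prop = log_post(n_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        n_cur, lp_cur = n_prop, lp_prop
    if step > 10_000 and step % 10 == 0:
        samples.append(n_cur.copy())

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))
print("posterior sd:  ", samples.std(axis=0))
```

The posterior standard deviations are the uncertainty estimate that plain matrix inversion lacks, and the non-negativity constraint in the prior removes the negative concentrations it can produce.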
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply that is typically skewed reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed that should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarified and typically confined to research endeavors, it has real world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion, sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
ERIC Educational Resources Information Center
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
Spread of information and infection on finite random networks
NASA Astrophysics Data System (ADS)
Isham, Valerie; Kaczmarska, Joanna; Nekovee, Maziar
2011-04-01
The modeling of epidemic-like processes on random networks has received considerable attention in recent years. While these processes are inherently stochastic, most previous work has been focused on deterministic models that ignore important fluctuations that may persist even in the infinite network size limit. In a previous paper, for a class of epidemic and rumor processes, we derived approximate models for the full probability distribution of the final size of the epidemic, as opposed to only mean values. In this paper we examine via direct simulations the adequacy of the approximate model to describe stochastic epidemics and rumors on several random network topologies: homogeneous networks, Erdös-Rényi (ER) random graphs, Barabasi-Albert scale-free networks, and random geometric graphs. We find that the approximate model is reasonably accurate in predicting the probability of spread. However, the position of the threshold and the conditional mean of the final size for processes near the threshold are not well described by the approximate model even in the case of homogeneous networks. We attribute this failure to the presence of other structural properties beyond degree-degree correlations, and in particular clustering, which are present in any finite network but are not incorporated in the approximate model. In order to test this “hypothesis” we perform additional simulations on a set of ER random graphs where degree-degree correlations and clustering are separately and independently introduced using recently proposed algorithms from the literature. Our results show that even strong degree-degree correlations have only weak effects on the position of the threshold and the conditional mean of the final size. On the other hand, the introduction of clustering greatly affects both the position of the threshold and the conditional mean. Similar analysis for the Barabasi-Albert scale-free network confirms the significance of clustering on the dynamics of rumor spread. For this network, though, with its highly skewed degree distribution, the addition of positive correlation had a much stronger effect on the final size distribution than was found for the simple random graph.
Analysis of genomic sequences by Chaos Game Representation.
Almeida, J S; Carriço, J A; Maretzek, A; Noble, P A; Fletcher, M
2001-05-01
Chaos Game Representation (CGR) is an iterative mapping technique that processes sequences of units, such as nucleotides in a DNA sequence or amino acids in a protein, in order to find the coordinates for their position in a continuous space. This distribution of positions has two properties: it is unique, and the source sequence can be recovered from the coordinates such that distance between positions measures similarity between the corresponding sequences. The possibility of using the latter property to identify succession schemes has been entirely overlooked in previous studies, which raises the possibility that CGR may be upgraded from a mere representation technique to a sequence modeling tool. The distributions of positions in the CGR plane were shown to be a generalization of Markov chain probability tables that accommodates non-integer orders. Therefore, Markov models are particular cases of CGR models rather than the reverse, as currently accepted. In addition, the CGR generalization has both practical (computational efficiency) and fundamental (scale independence) advantages. These results are illustrated by using Escherichia coli K-12 as a test data-set, in particular, the genes thrA, thrB and thrC of the threonine operon.
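A minimal sketch of the CGR iteration for DNA: each nucleotide pulls the current point halfway toward its assigned corner of the unit square (corner assignment conventions vary between papers):

```python
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def chaos_game_representation(seq):
    """Return the CGR coordinates for each position of a DNA sequence."""
    pts = np.empty((len(seq), 2))
    x, y = 0.5, 0.5                             # start at the centre of the square
    for i, base in enumerate(seq.upper()):
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # move halfway to the corner
        pts[i] = (x, y)
    return pts

pts = chaos_game_representation("ATGGCGAATTCTAGGCCA")
print(pts[-3:])
# Counting points in a 2^k x 2^k grid over the square recovers the k-mer
# frequency table, which is how CGR generalizes Markov chain probability tables.
print(np.histogram2d(pts[:, 0], pts[:, 1], bins=4, range=[[0, 1], [0, 1]])[0])
```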
Worker Personality and Its Association with Spatially Structured Division of Labor
Pamminger, Tobias; Foitzik, Susanne; Kaufmann, Katharina C.; Schützler, Natalie; Menzel, Florian
2014-01-01
Division of labor is a defining characteristic of social insects and fundamental to their ecological success. Many of the numerous tasks essential for the survival of the colony must be performed at a specific location. Consequently, spatial organization is an integral aspect of division of labor. The mechanisms organizing the spatial distribution of workers, separating inside and outside workers without central control, are an essential, but so far neglected, aspect of division of labor. In this study, we investigate the behavioral mechanisms governing the spatial distribution of individual workers and its physiological underpinning in the ant Myrmica rubra. By investigating worker personalities we uncover position-associated behavioral syndromes. This context-independent and temporally stable set of correlated behaviors (positive association between movements and attraction towards light) could promote the basic separation between inside workers (brood tenders) and outside workers (foragers). These position-associated behavioral syndromes are coupled with a high probability of performing tasks located at the defined position, and with a characteristic cuticular hydrocarbon profile. We discuss the potential physiological causes for the observed behavioral syndromes and highlight how the study of animal personalities can provide new insights for the study of division of labor and self-organized processes in general.
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thus forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
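The U.S. practice described above has a compact method-of-moments form; a minimal sketch (the peak-flow record is illustrative, and scipy's pearson3 is assumed for the fitted log-space distribution):

```python
import numpy as np
from scipy import stats

# Annual peak flows (illustrative values, cfs):
peaks = np.array([1200, 3400, 2100, 980, 5600, 1750, 2900, 4100, 1500, 2300,
                  3100, 870, 6200, 1950, 2600])
logq = np.log10(peaks)

# Method-of-moments Log-Pearson Type III fit: mean, sd and skew of the log flows.
m, s, g = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

# 100-year flood = 99th percentile of the fitted distribution, back-transformed.
q100 = 10 ** stats.pearson3.ppf(0.99, g, loc=m, scale=s)
print(f"estimated 100-year peak: {q100:.0f} cfs")
```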
Rostad, C.E.; Leenheer, J.A.
2004-01-01
Effects of methylation, molar response, multiple charging, solvents, and positive and negative ionization on molecular weight distributions of aquatic fulvic acid were investigated by electrospray ionization/mass spectrometry. After preliminary analysis in positive and negative modes, samples and mixtures of standards were derivatized by methylation to minimize ionization sites and reanalyzed. Positive ionization was less effective and produced more complex spectra than negative ionization. Ionization in methanol/water produced greater response than in acetonitrile/water. Molar response varied widely for the selected free acid standards when analyzed individually and in a mixture, but after methylation this range decreased. After methylation, the number-average molecular weight of the Suwannee River fulvic acid remained the same while the weight-average molecular weight decreased. These differences are probably indicative of disaggregation of large aggregated ions during methylation. Since the weight-average molecular weight decreased, it is likely that aggregate formation in the fulvic acid was present prior to derivatization, rather than multiple charging in the mass spectra. © 2004 Elsevier B.V. All rights reserved.
An IUR evolutionary game model on the patent cooperate of Shandong China
NASA Astrophysics Data System (ADS)
Liu, Mengmeng; Ma, Yinghong; Liu, Zhiyuan; You, Xuemei
2017-06-01
Organizations from industry and from universities & research institutes cooperate, on the basis of social contacts and trust, to meet their respective needs and share complementary resources. From the perspective of complex networks, together with the patent data of Shandong province in China, a novel evolutionary game model on a patent cooperation network is presented. The two sides in the game model are industries and universities & research institutes, respectively. Cooperation is represented by a connection when a new patent is developed jointly by the two sides. The optimal strategy of the evolutionary game model is quantified by the average positive cooperation probability p̄ and the average payoff Ū. The feasibility of this game model is simulated with respect to parameters such as the knowledge spillover, the punishment, the development cost and the distribution coefficient of the benefit. The numerical simulations show that the cooperative behaviors are affected by the variation of these parameters. The knowledge spillover displays different behaviors depending on whether the punishment is larger or smaller than the development cost. These results indicate that reasonable punishment improves positive cooperation, and that appropriate punishment helps induce high-degree nodes to cooperate positively with industries and universities & research institutes. An equitable plan for distributing cooperative profits is a half-and-half split between the two sides in the game.
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2014-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
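The kind of question the web-application addresses reduces to integrating a conditional slice of a bivariate normal density; a minimal sketch with scipy (the height/weight means and covariance are illustrative, not the real dataset):

```python
import numpy as np
from scipy import stats

# Illustrative adolescent height (in) / weight (lb) parameters, not the real data.
mean = np.array([65.0, 130.0])
cov = np.array([[ 9.0,  20.0],
                [20.0, 225.0]])

# P(120 <= weight <= 140 | height = 65) via the conditional normal:
h = 65.0
mu_w = mean[1] + cov[1, 0] / cov[0, 0] * (h - mean[0])          # conditional mean
sd_w = np.sqrt(cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0])          # conditional sd
p = stats.norm.cdf(140, mu_w, sd_w) - stats.norm.cdf(120, mu_w, sd_w)
print(f"P(120 <= W <= 140 | H = {h}) = {p:.3f}")
```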
Virtual detector theory for strong-field atomic ionization
NASA Astrophysics Data System (ADS)
Wang, Xu; Tian, Justin; Eberly, J. H.
2018-04-01
A virtual detector (VD) is an imaginary device located at a fixed position in space that extracts information from the wave packet passing through it. By recording the particle momentum and the corresponding probability current at each time, the VDs can accumulate and build the differential momentum distribution of the particle, in a way that resembles real experiments. A mathematical proof is given for the equivalence of the differential momentum distribution obtained by the VD method and by Fourier transforming the wave function. In addition to being a tool for reducing the computational load, VDs have also been found useful in interpreting the ultrafast strong-field ionization process, especially the controversial quantum tunneling process.
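The VD bookkeeping is easy to reproduce numerically. The sketch below, a free 1-D Gaussian packet evolved by split-step Fourier with ħ = m = 1, places one virtual detector at x_d, records the local momentum k(x_d, t) = j/ρ from the probability current at each step, and accumulates a flux-weighted histogram; all grid and packet parameters are illustrative choices, not values from the paper.

```python
import numpy as np

# Grid and a free Gaussian packet (hbar = m = 1) starting left of the detector
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
k0, sigma, x0 = 2.0, 5.0, -50.0
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (2 * sigma**2) + 1j * k0 * x)

x_d = 50.0                                  # virtual detector position
i_d = np.argmin(np.abs(x - x_d))
dt, steps = 0.05, 2000
kick = np.exp(-0.5j * k**2 * dt)            # exact free-particle step in k space

momenta, weights = [], []
for _ in range(steps):
    psi = np.fft.ifft(kick * np.fft.fft(psi))
    dpsi = np.gradient(psi, dx)
    rho = np.abs(psi[i_d]) ** 2                    # density at the detector
    j = np.imag(np.conj(psi[i_d]) * dpsi[i_d])     # probability current there
    if rho > 1e-12:
        momenta.append(j / rho)          # local momentum recorded by the VD
        weights.append(abs(j) * dt)      # probability flux through the VD

# The flux-weighted histogram approximates the differential momentum
# distribution, which for free evolution should match the initial |psi~(k)|^2.
hist, edges = np.histogram(momenta, bins=100, weights=weights, density=True)
print("flux-weighted mean momentum:", np.average(momenta, weights=weights))  # ~ k0
```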
Work Measurement as a Generalized Quantum Measurement
NASA Astrophysics Data System (ADS)
Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo
2014-12-01
We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, technically a positive operator valued measure (POVM), reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.
NASA Astrophysics Data System (ADS)
Parolari, A.; Goulden, M.
2017-12-01
A major challenge to interpreting asymmetric changes in ecosystem productivity is the attribution of these changes to external climate forcing or to internal ecophysiological processes that respond to these drivers (e.g., photosynthesis response to drying soil). For example, positive asymmetry in productivity can result from either positive skewness in the distribution of annual rainfall amount or from negative curvature in the productivity response to annual rainfall. To analyze the relative influences of climate and ecosystem dynamics on both positive and negative asymmetry in multi-year ANPP experiments, we use a multi-scale coupled ecosystem water-carbon model to interpret field experimental results that span gradients of rainfall skewness and ANPP response curvature. The model integrates rainfall variability, soil moisture dynamics, and net carbon assimilation from the daily to inter-annual scales. From the underlying physical basis of the model, we compute the joint probability distribution of the minimum and maximum ANPP for an annual ANPP experiment of N years. The distribution is used to estimate the likelihood that either positive or negative asymmetry will be observed in an experiment, given the annual rainfall distribution and the ANPP response curve. We estimate the total asymmetry as the mode of this joint distribution and the relative contribution attributable to rainfall skewness as the mode for a linear ANPP response curve. Applied to data from several long-term ANPP experiments, we find that there is a wide range of observed ANPP asymmetry (positive and negative) and a spectrum of contributions from internal and external factors. We identify the soil water holding capacity relative to the mean rain event depth as a critical ecosystem characteristic that controls the non-linearity of the ANPP response and positive curvature at high rainfall. Further, the seasonal distribution of rainfall is shown to control the presence or absence of negative curvature at low rainfall. Therefore, a combination of rooting depth, soil texture, and climate seasonality contribute to ANPP response curvature and its contribution to overall observed asymmetry.
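As a toy version of this attribution logic, the Monte Carlo below pushes positively skewed (gamma) annual rainfall through a saturating, negative-curvature productivity response and tallies the sign of the min/max asymmetry over an N-year experiment. The Michaelis-Menten response and all parameters are hypothetical stand-ins for the paper's coupled water-carbon model.

```python
import numpy as np

rng = np.random.default_rng(42)

def anpp(rain, a_max=800.0, r_half=300.0):
    """Hypothetical saturating ANPP response (negative curvature)."""
    return a_max * rain / (rain + r_half)

n_years, trials = 15, 100_000
# Positively skewed annual rainfall (mm), a stand-in for the rainfall model
rain = rng.gamma(shape=4.0, scale=100.0, size=(trials, n_years))
prod = anpp(rain)

# Asymmetry: does the wettest year depart from the mean more than the driest?
asym = (prod.max(axis=1) - prod.mean(axis=1)) - (prod.mean(axis=1) - prod.min(axis=1))
print("P(positive asymmetry):", (asym > 0).mean())

# Rainfall-only contribution: same experiment with a linear response, so any
# asymmetry reflects rainfall skewness alone
lin = rain
asym_lin = (lin.max(axis=1) - lin.mean(axis=1)) - (lin.mean(axis=1) - lin.min(axis=1))
print("P(positive asymmetry | linear response):", (asym_lin > 0).mean())
```

Comparing the two probabilities shows the competition the abstract describes: skewed rainfall alone pushes the observed asymmetry positive, while the saturating response curvature pulls it back toward negative values.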
Probabilistic Reasoning for Robustness in Automated Planning
NASA Technical Reports Server (NTRS)
Schaffer, Steven; Clement, Bradley; Chien, Steve
2007-01-01
A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as the times taken to perform tasks and the amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain the probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified averseness to risk and other measures of optimality.
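A stripped-down sketch of that scoring step, assuming independent, unbounded Gaussians (the real system uses bounded ones and supports several combination schemes):

```python
from math import sqrt
from statistics import NormalDist

def conflict_probability(actions, capacity):
    """actions: list of (mean_use, sd_use) Gaussians, assumed independent."""
    mu = sum(m for m, _ in actions)
    sd = sqrt(sum(s * s for _, s in actions))
    # P(total use > capacity) = upper tail of the combined Gaussian
    return 1.0 - NormalDist(mu, sd).cdf(capacity)

# Hypothetical plan: three activities drawing on a 100-unit resource
plan = [(30.0, 5.0), (25.0, 4.0), (35.0, 6.0)]
risk = conflict_probability(plan, capacity=100.0)
print(f"probability of violating the resource limit: {risk:.3f}")
# A planner can score candidate plans by this risk, weighed against other
# measures of optimality and the user's specified risk averseness.
```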
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about its fractiles or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and we work through full examples to illustrate the approach.
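The flat, discontinuous shape is easy to see in code: between consecutive fractiles, maximum entropy spreads each interval's probability mass uniformly. A minimal construction with made-up fractiles and assumed hard bounds:

```python
import numpy as np

# Elicited fractiles: cumulative probability p at value x (hypothetical numbers,
# with assumed lower and upper bounds at p = 0 and p = 1)
p = np.array([0.0, 0.10, 0.50, 0.90, 1.0])
x = np.array([0.0, 2.0, 5.0, 11.0, 20.0])

# Maximum entropy density between fractile constraints is flat on each interval
density = np.diff(p) / np.diff(x)
for lo, hi, d in zip(x[:-1], x[1:], density):
    print(f"f(x) = {d:.4f} on [{lo}, {hi})")

# Sampling from the FMED: pick an interval by its mass, then uniformly within it
rng = np.random.default_rng(0)
idx = rng.choice(len(density), size=10_000, p=np.diff(p))
samples = rng.uniform(x[idx], x[idx + 1])
print("sample median (should be near 5):", np.median(samples))
```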
Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations
NASA Astrophysics Data System (ADS)
Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.
2018-04-01
Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts for use in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data were abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficiently detailed, and this impacts the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
Generalized Arcsine Laws for Fractional Brownian Motion.
Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg
2018-01-26
The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian motion B_t starting from the origin and evolving during time T, one considers the following three observables: (i) the duration t_+ for which the process is positive, (ii) the time t_last at which the process last visits the origin, and (iii) the time t_max at which it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution, expressed as an arcsine function, hence the name arcsine laws. We show how these laws change for fractional Brownian motion X_t, a non-Markovian Gaussian process indexed by the Hurst exponent H, which generalizes standard Brownian motion (i.e., H = 1/2). We obtain the three probabilities using a perturbative expansion in ϵ = H − 1/2. While all three probabilities are different, this distinction can only be made at second order in ϵ. Our results are confirmed to high precision by extensive numerical simulations.
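For the standard Brownian case H = 1/2, the laws can be checked with a few lines of simulation; the sketch below compares the empirical CDFs of t_+ and t_max (with T = 1) against (2/π) arcsin √u. Extending it to H ≠ 1/2 would require a fractional Gaussian noise generator (e.g., the Davies-Harte method), which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)
trials, n = 5_000, 2_000
dt = 1.0 / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(trials, n)), axis=1)

t_plus = (B > 0).mean(axis=1)    # fraction of time the path is positive
t_max = B.argmax(axis=1) / n     # time of the running maximum

arcsine_cdf = lambda u: (2.0 / np.pi) * np.arcsin(np.sqrt(u))
u = np.linspace(0.05, 0.95, 10)
for name, obs in [("t_+", t_plus), ("t_max", t_max)]:
    emp = np.array([(obs <= ui).mean() for ui in u])
    print(name, "max |empirical CDF - arcsine CDF| =",
          np.abs(emp - arcsine_cdf(u)).max())
```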
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to both heavy tailed or narrow coincidence distribution. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
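A minimal version of that Monte Carlo experiment: pairs of independent renewal trains with gamma-distributed ISIs (shape 1 recovers a Poisson process, shape < 1 is burstier, shape > 1 more regular), binned coincidence counts, and the spread of the resulting distribution. Rates, bin width, and shape values are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_renewal_train(rate, shape, t_max):
    """Spike times of a renewal process with gamma ISIs (mean ISI = 1/rate)."""
    n_guess = int(3 * rate * t_max) + 20
    isi = rng.gamma(shape, 1.0 / (rate * shape), size=n_guess)
    t = np.cumsum(isi)
    return t[t < t_max]

def coincidence_count(t1, t2, t_max, bin_width=0.005):
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    c1, _ = np.histogram(t1, bins)
    c2, _ = np.histogram(t2, bins)
    return np.sum((c1 > 0) & (c2 > 0))   # bins with a joint spike event

rate, t_max, trials = 20.0, 10.0, 2_000
for shape in (0.5, 1.0, 4.0):            # bursty, Poisson, regular
    counts = [coincidence_count(gamma_renewal_train(rate, shape, t_max),
                                gamma_renewal_train(rate, shape, t_max), t_max)
              for _ in range(trials)]
    print(f"gamma shape {shape}: mean {np.mean(counts):.1f}, sd {np.std(counts):.2f}")
```

The width of the coincidence-count distribution varies with the ISI autostructure even at matched firing rates, which is exactly why calibrating significance against a Poisson surrogate can mislead.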
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
NASA Astrophysics Data System (ADS)
Yoshida, Sota; Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu
2018-05-01
We perform large-scale shell-model calculations of β-decay properties for neutron-rich nuclei with 13 ≤ Z ≤ 18 and 22 ≤ N ≤ 34, taking the first-forbidden transitions into account. The natural-parity and unnatural-parity states are calculated in the 0ℏω and 1ℏω model spaces, respectively, within the full sd + pf + sdg valence shell. The calculated β-decay half-lives and β-delayed neutron emission probabilities show good agreement with the experimental data. The first-forbidden transitions make a non-negligible contribution to the half-lives of N ≳ 28 nuclei. The low-lying Gamow-Teller strengths of even-even nuclei are considerably larger than those of the neighboring odd-A and odd-odd nuclei, strongly affecting the half-lives and neutron emission probabilities. It is shown that this even-odd effect is caused by the J^π = 1^+ proton-neutron pairing interaction. We derive a formula to represent the positions of the Gamow-Teller giant resonances from the calculated strength distributions.
Automated segmentation of linear time-frequency representations of marine-mammal sounds.
Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I
2013-09-01
Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, which include recognition, localization, and density estimation. This study introduces a low-parameter automated spectrogram segmentation method that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data comprising long sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. The proposed method is shown to increase the probability of detection for the same probability of false alarms.
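The first (background-thresholding) step can be sketched as follows: under noise-only conditions, each squared-magnitude spectrogram bin is approximately exponential (a scaled chi-squared with 2 degrees of freedom), so the Neyman-Pearson threshold for a chosen false-alarm probability has a closed form. The synthetic data, robust noise estimate, and all settings below are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import spectrogram, chirp

rng = np.random.default_rng(1)
fs = 8000.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic "whistle": a chirp buried in white Gaussian noise
signal = chirp(t, f0=500, f1=2500, t1=2.0) * 0.3 + rng.normal(0, 1, t.size)

f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)

# Under H0 (noise only) each bin of Sxx is ~ Exponential(mean mu0), so
# P(Sxx > T) = exp(-T / mu0)  =>  T = -mu0 * ln(Pfa).
mu0 = np.median(Sxx) / np.log(2)   # robust mean estimate (median of Exp = mu*ln 2)
pfa = 1e-3
T = -mu0 * np.log(pfa)

detections = Sxx > T               # binary time-frequency detection map
print("fraction of bins above threshold:", detections.mean())
```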
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having the uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
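The construction is the conditional-distribution method: f_1 inverts the marginal CDF of x_1 and each later f_i inverts the conditional CDF given the earlier coordinates. A two-dimensional sketch with a hypothetical joint law, X_1 ~ Exp(1) and X_2 | X_1 = x_1 ~ Exp(rate 1/x_1):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
u1, u2 = rng.uniform(size=(2, n))

# f1: invert the marginal CDF F1(x) = 1 - exp(-x)
x1 = -np.log(1.0 - u1)

# f2: invert the conditional CDF F(x2 | x1) = 1 - exp(-x2 / x1)
x2 = -x1 * np.log(1.0 - u2)

# Checks: E[X2] = E[E[X2 | X1]] = E[X1] = 1, and X1, X2 positively correlated
print("E[x1] ~", x1.mean(), " E[x2] ~", x2.mean())
print("corr(x1, x2) ~", np.corrcoef(x1, x2)[0, 1])
```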
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source-term probability distributions supplied by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
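Generically, the combination-plus-CCDF step looks like the sketch below; the three factor distributions and their multiplicative relationship are placeholders for the FSAR's actual models, chosen only to make the construction concrete.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

# Placeholder factor distributions (illustrative only)
source_term = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # released activity
dispersion = rng.lognormal(mean=-1.0, sigma=0.5, size=n)   # atmospheric dilution
dose_factor = rng.uniform(0.5, 1.5, size=n)                # exposure-to-effect

# Assumed multiplicative relationship among the factor effects
consequence = source_term * dispersion * dose_factor

# Complementary cumulative distribution function: P(C >= c)
c_sorted = np.sort(consequence)
ccdf = 1.0 - np.arange(1, n + 1) / n
for q in (0.5, 0.9, 0.99):
    print(f"P(C >= {np.quantile(consequence, q):.2f}) = {1 - q:.2f}")
```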
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
1981-02-01
monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's ... possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ... Since r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for
J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration
2017-10-01
We present a morphological classification of J-PLUS EDR sources into compact (i.e. stars) and extended (i.e. galaxies) sources. The classification is based on Bayesian modelling of the concentration distribution, including observational errors and magnitude + sky position priors. We provide the star/galaxy probability of each source computed from the gri images. The comparison with the SDSS number counts supports our classification up to r ≈ 21. The 31.7 deg² analysed comprises 150k stars and 101k galaxies.
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
Yeh, Jun-Jun; Neoh, Choo-Aun; Chen, Cheng-Ren; Chou, Christine Yi-Ting; Wu, Ming-Ting
2014-01-01
This study evaluated the use of high-resolution computed tomography (HRCT) to predict the presence of culture-positive pulmonary tuberculosis (PTB) in adult patients with pulmonary lesions in the emergency department (ED). The study included a derivation phase and a validation phase with a total of 8,245 patients with pulmonary disease. There were 132 patients with culture-positive PTB in the derivation phase and 147 patients with culture-positive PTB in the validation phase. Imaging evaluation of pulmonary lesions included morphology and segmental distribution. Post-test probabilities were compared between the two phases and across three prevalence settings. In the derivation phase, a multivariate analysis model identified cavitation, consolidation, and clusters/nodules in the right or left upper lobe (except the anterior segment) and consolidation of the superior segment of the right or left lower lobe as independent positive factors for culture-positive PTB, while consolidation of the right or left lower lobe (except the superior segment) was an independent negative factor. An ideal cutoff point based on the receiver operating characteristic (ROC) curve analysis was obtained at a score of 1. The sensitivity, specificity, positive predictive value, and negative predictive value in the derivation phase were 98.5% (130/132), 99.7% (3997/4008), 92.2% (130/141), and 99.9% (3997/3999). Based on the positive likelihood ratio of 328.33 in the derivation phase, the post-test probability was 91.5% in the derivation phase, 92.5% in the validation phase, 94.5% in a high TB prevalence area, 91.0% in a moderate prevalence area, and 76.8% in a moderate-to-low prevalence area. Our model using HRCT, which is feasible to perform in the ED, can promptly diagnose culture-positive PTB in moderate and moderate-to-low prevalence areas.
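The reported post-test probabilities follow from Bayes' rule in odds form; plugging in the derivation-phase pre-test probability (132/4140 ≈ 3.2%) and LR+ = 328.33 reproduces the 91.5% figure:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Bayes' rule in odds form: post-odds = pre-odds * LR."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

lr_pos = 328.33
pretest = 132 / (132 + 4008)          # derivation-phase prevalence
print(f"post-test probability: {post_test_probability(pretest, lr_pos):.3f}")  # ~0.915
```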
Bayes to the Rescue: Continuous Positive Airway Pressure Has Less Mortality Than High-Flow Oxygen.
Modesto I Alapont, Vicent; Khemani, Robinder G; Medina, Alberto; Del Villar Guerra, Pablo; Molina Cambra, Alfred
2017-02-01
The merits of high-flow nasal cannula oxygen versus bubble continuous positive airway pressure are debated in children with pneumonia, with suggestions that randomized controlled trials are needed. In light of a previous randomized controlled trial showing a trend toward lower mortality with bubble continuous positive airway pressure, we sought to determine the probability that a new randomized controlled trial would find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure through a "robust" Bayesian analysis. Sample data were extracted from the trial by Chisti et al., and, as required for "robust" Bayesian analysis, we specified three prior distributions to represent clinically meaningful assumptions. These priors (reference, pessimistic, and optimistic) were used to generate three scenarios representing the range of possible hypotheses: 1) "Reference": we believe bubble continuous positive airway pressure and high-flow nasal cannula oxygen are equally effective, with the same uninformative reference priors; 2) "Sceptic on high-flow nasal cannula oxygen": we believe that bubble continuous positive airway pressure is better than high-flow nasal cannula oxygen (bubble continuous positive airway pressure has an optimistic prior and high-flow nasal cannula oxygen has a pessimistic prior); and 3) "Enthusiastic on high-flow nasal cannula oxygen": we believe that high-flow nasal cannula oxygen is better than bubble continuous positive airway pressure (high-flow nasal cannula oxygen has an optimistic prior and bubble continuous positive airway pressure has a pessimistic prior). Finally, posterior empirical Bayesian distributions were obtained through 100,000 Markov Chain Monte Carlo simulations. In all three scenarios, there was a high probability of more deaths from high-flow nasal cannula oxygen compared with bubble continuous positive airway pressure (reference, 0.98; sceptic on high-flow nasal cannula oxygen, 0.982; enthusiastic on high-flow nasal cannula oxygen, 0.742). The posterior 95% credible interval on the difference in mortality indicated that a future randomized controlled trial would be extremely unlikely to find a mortality benefit for high-flow nasal cannula oxygen over bubble continuous positive airway pressure, regardless of the scenario. Interpreting these findings using the "range of practical equivalence" framework would lead to rejecting the hypothesis that high-flow nasal cannula oxygen is superior to bubble continuous positive airway pressure for these children. For children younger than 5 years with pneumonia, high-flow nasal cannula oxygen has higher mortality than bubble continuous positive airway pressure. A future randomized controlled trial in this population is unlikely to find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure.
Measurement of droplet size distribution in core region of high-speed spray by micro-probe L2F
NASA Astrophysics Data System (ADS)
Sakaguchi, Daisaku; Le Amida, Oluwo; Ueki, Hironobu; Ishida, Masahiro
2008-03-01
In order to investigate the distribution of droplet sizes in the core region of a diesel fuel spray, instantaneous measurement of droplet sizes was conducted by an advanced laser 2-focus velocimeter (L2F). The micro-scale probe of the L2F consists of two foci separated by 36 µm. The tested nozzle had a single 0.2-mm-diameter hole. The measurements of injection pressure, needle lift, and crank angle were synchronized with the L2F measurement at a position 10 mm downstream from the nozzle exit. It is clearly shown that droplets near the spray axis are larger than those in the off-axis region under the needle full-lift condition and that the spatial distribution of droplet sizes varies temporally. It is found that the probability density distribution of droplet sizes in the spray core region can be fitted by the Nukiyama-Tanasawa distribution in most injection periods.
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
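As a concrete instance, the first two probability weighted moments give closed-form parameter estimates for the Gumbel distribution. The sketch below implements the unbiased b_r estimator and the Gumbel inversion; the Gumbel example is an illustration, not the paper's motivating Tukey-lambda case.

```python
import numpy as np

def pwm_b(x, r):
    """Unbiased estimator of the probability weighted moment b_r."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (i - k) / (n - k)
    return np.mean(w * x)

rng = np.random.default_rng(6)
x = rng.gumbel(loc=10.0, scale=3.0, size=5_000)

b0, b1 = pwm_b(x, 0), pwm_b(x, 1)
beta = (2 * b1 - b0) / np.log(2)      # Gumbel scale from PWMs
mu = b0 - 0.5772 * beta               # Gumbel location (Euler-Mascheroni constant)
print(f"estimated loc {mu:.2f}, scale {beta:.2f}")   # ~ (10, 3)
```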
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
Hu, Bifeng; Zhao, Ruiying; Chen, Songchao; Zhou, Yue; Jin, Bin; Li, Yan; Shi, Zhou
2018-04-10
Assessing heavy metal pollution and delineating pollution are the bases for evaluating pollution and determining a cost-effective remediation plan. Most existing studies are based on the spatial distribution of pollutants but ignore related uncertainty. In this study, eight heavy-metal concentrations (Cr, Pb, Cd, Hg, Zn, Cu, Ni, and Zn) were collected at 1040 sampling sites in a coastal industrial city in the Yangtze River Delta, China. The single pollution index (PI) and Nemerow integrated pollution index (NIPI) were calculated for every surface sample (0-20 cm) to assess the degree of heavy metal pollution. Ordinary kriging (OK) was used to map the spatial distribution of heavy metals content and NIPI. Then, we delineated composite heavy metal contamination based on the uncertainty produced by indicator kriging (IK). The results showed that mean values of all PIs and NIPIs were at safe levels. Heavy metals were most accumulated in the central portion of the study area. Based on IK, the spatial probability of composite heavy metal pollution was computed. The probability of composite contamination in the central core urban area was highest. A probability of 0.6 was found as the optimum probability threshold to delineate polluted areas from unpolluted areas for integrative heavy metal contamination. Results of pollution delineation based on uncertainty showed the proportion of false negative error areas was 6.34%, while the proportion of false positive error areas was 0.86%. The accuracy of the classification was 92.80%. This indicated the method we developed is a valuable tool for delineating heavy metal pollution.
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, so the regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution type of the parameters which affect ET0 can affect the distribution of reference evapotranspiration.
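A PPCC statistic is simply the correlation between the ordered sample and the candidate distribution's quantiles at plotting positions. The sketch below uses the common Weibull plotting position i/(n+1) and treats the Pearson III skew as known; the paper's exact plotting-position formula and parameter-estimation choices may differ.

```python
import numpy as np
from scipy import stats

def ppcc(sample, dist, *shape_params):
    """Probability plot correlation coefficient for a candidate distribution."""
    x = np.sort(sample)
    n = len(x)
    pp = np.arange(1, n + 1) / (n + 1.0)       # Weibull plotting positions
    q = dist.ppf(pp, *shape_params)            # theoretical quantiles
    return np.corrcoef(x, q)[0, 1]             # location/scale drop out

rng = np.random.default_rng(2024)
# Synthetic "annual ET0" sample from a Pearson III law (illustrative values)
et0 = stats.pearson3.rvs(skew=0.8, loc=1000, scale=120, size=60, random_state=rng)

# Compare candidate models by PPCC (closer to 1 = straighter Q-Q plot);
# in practice the skew would itself be estimated from the sample
print("Pearson III PPCC:", ppcc(et0, stats.pearson3, 0.8))
print("Normal      PPCC:", ppcc(et0, stats.norm))
```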
Persistent positive North Atlantic oscillation mode dominated the Medieval Climate Anomaly.
Trouet, Valérie; Esper, Jan; Graham, Nicholas E; Baker, Andy; Scourse, James D; Frank, David C
2009-04-03
The Medieval Climate Anomaly (MCA) was the most recent pre-industrial era warm interval of European climate, yet its driving mechanisms remain uncertain. We present here a 947-year-long multidecadal North Atlantic Oscillation (NAO) reconstruction and find a persistent positive NAO during the MCA. Supplementary reconstructions based on climate model results and proxy data indicate a clear shift to weaker NAO conditions into the Little Ice Age (LIA). Globally distributed proxy data suggest that this NAO shift is one aspect of a global MCA-LIA climate transition that probably was coupled to prevailing La Niña-like conditions amplified by an intensified Atlantic meridional overturning circulation during the MCA.
2009-11-01
is estimated using the Gaussian kernel function:

c'(w, i) = Σ_{j=1}^{N} c(w, j) exp[−(i − j)² / (2σ²)]   (2)

where i and j are absolute positions of the corresponding terms in the document, and N is the length of the document; c(w, j) is the actual count of term w at position j. The PLM P(·|D, i) needs to ... probability of relevance well. The distribution of relevance can be approximated as follows:

p(i|θ_rel) = Σ_j δ(Q_j, i) / (Σ_i Σ_j δ(Q_j, i))   (10)
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: fuzzy transformation via a ranking function, and stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Work probability distribution and tossing a biased coin
NASA Astrophysics Data System (ADS)
Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar
2011-01-01
We show that the rare events present in dissipated work that enters the Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing the work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for a distribution grid with distributed renewable energy generation. First, the Weibull and Beta distributions are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power-system production cost simulation, probability discretization and linearized power flow, an optimal power flow minimizing the cost of conventional generation is solved, so that a reliability assessment for the distribution grid can be carried out quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
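The same two indices can be estimated by brute-force Monte Carlo, which is the baseline the paper's discretized, linearized method accelerates. A sketch with hypothetical turbine, PV, and load parameters:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500_000

# Wind: Weibull speed -> simple piecewise (linear-ramp) turbine curve, assumed
v = rng.weibull(2.0, n) * 8.0                      # shape 2, scale 8 m/s
p_rated, v_in, v_r, v_out = 2.0, 3.0, 12.0, 25.0   # MW, m/s (illustrative)
wind = np.where((v >= v_in) & (v < v_r), p_rated * (v - v_in) / (v_r - v_in), 0.0)
wind = np.where((v >= v_r) & (v < v_out), p_rated, wind)

# Solar: Beta-distributed irradiance fraction times PV capacity (assumed params)
solar = 1.5 * rng.beta(2.0, 2.5, n)                # MW

load = rng.normal(2.5, 0.3, n)                     # MW, hypothetical load model
shortfall = np.maximum(load - (wind + solar), 0.0)

lolp = (shortfall > 0).mean()                      # Loss Of Load Probability
eens = shortfall.mean() * 8760                     # Expected Energy Not Supplied, MWh/yr
print(f"LOLP = {lolp:.3f}, EENS = {eens:.0f} MWh/yr")
```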
England, Marion E; Phipps, Paul; Medlock, Jolyon M; Atkinson, Peter M; Atkinson, Barry; Hewson, Roger; Gale, Paul
2016-06-01
Crimean-Congo haemorrhagic fever virus (CCHFV) is a zoonotic virus transmitted by Hyalomma ticks, the immature stages of which may be carried by migratory birds. In this study, a total of 12 Hyalomma ticks were recovered from five of 228 migratory birds trapped in spring 2012 in southern Spain along the East Atlantic flyway. All collected ticks tested negative for CCHFV. While most birds had zero Hyalomma ticks, two individuals had four and five ticks each, and the statistical distribution of Hyalomma tick counts per bird is over-dispersed compared with the Poisson distribution, demonstrating the need for intensive sampling studies to avoid underestimating the total number of ticks. Rates of tick exchange on migratory birds during their northwards migration will affect the probability that a Hyalomma tick entering Great Britain (GB) is positive for CCHFV. Drawing on published data, evidence is presented that the latitude of a European country affects the probability of entry of Hyalomma ticks on wild birds. Further data on Hyalomma infestation rates and tick exchange rates are required along the East Atlantic flyway to further our understanding of the origin of Hyalomma ticks (i.e., Africa or southern Europe) and hence the probability of entry of CCHFV into GB. © 2016 The Society for Vector Ecology.
Identification of probabilities.
Vitányi, Paul M B; Chater, Nick
2017-02-01
Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results. First, for a broad class of probability distributions characterized by computability restrictions, we specify a learning algorithm that will almost surely identify a probability distribution in the limit given a finite i.i.d. sample of sufficient but unknown length. This is similarly shown to hold for sequences generated by a broad class of Markov chains, subject to computability assumptions. The technical tool is the strong law of large numbers. Second, for a large class of dependent sequences, we specify an algorithm which identifies in the limit a computable measure for which the sequence is typical, in the sense of Martin-Löf (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We analyze the associated predictions in both cases. We also briefly consider special cases, including language learning, and wider theoretical implications for psychology.
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2011 CFR
2011-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
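The recommended approach, maximum likelihood on the cumulative distribution of interval-censored retention times, amounts to maximizing a multinomial likelihood over the interval probabilities F(hi) − F(lo). A sketch with a lognormal truth and hourly sampling intervals (both illustrative choices):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
true_dist = stats.lognorm(s=0.5, scale=np.exp(1.5))   # "true" retention time (h)
t = true_dist.rvs(500, random_state=rng)

# Discretize into hourly sampling intervals, as in a typical experiment
edges = np.arange(0.0, np.ceil(t.max()) + 1.0)
counts, _ = np.histogram(t, edges)

def neg_log_lik(params):
    s, scale = params
    if s <= 0 or scale <= 0:
        return np.inf
    cdf = stats.lognorm.cdf(edges, s, scale=scale)
    p = np.diff(cdf)                    # interval probabilities F(hi) - F(lo)
    return -np.sum(counts * np.log(np.maximum(p, 1e-300)))

res = optimize.minimize(neg_log_lik, x0=[1.0, 3.0], method="Nelder-Mead")
print("estimated (s, scale):", res.x, " true:", (0.5, np.exp(1.5)))
```

Fitting the same counts with interval midpoints as if they were exact observations is the shortcut the authors warn against; the likelihood above uses only what the sampling design actually observed.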
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
ERIC Educational Resources Information Center
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, R; Tian, L; Ge, H
Purpose: To evaluate the dosimetry of the microscopic disease (MD) region of lung cancer in stereotactic body radiation therapy (SBRT). Methods: For simplicity, we assume the organ moves along one dimension. The probability distribution function of tumor position was calculated according to the breathing cycle. The dose to the MD region was obtained by accumulating the treatment-planning-system calculated doses at different positions in a breathing cycle. A phantom experiment was then conducted to validate the calculated results using a motion phantom (the CIRS 'Dynamic' Thorax Phantom). The simulated breathing pattern used a cos⁴(x) curve with an amplitude of 10 mm. A 3-D conformal 7-field plan with 6X energy was created and the dose was calculated on the average intensity projection (AIP) simulation CT images. Both films (EBT2) and optically stimulated luminescence (OSL) detectors were inserted in the target of the phantom to measure the dose during radiation delivery (Varian Truebeam), and results were compared to planning dose parameters. Results: The Gamma analysis (3%/3 mm) between the dose measured with EBT2 film and the dose calculated on the AIP was 80.5%, indicating substantial dosimetric differences, while the Gamma analysis (3%/3 mm) between the EBT2-measured dose and the 4D-CT accumulated dose was 98.9%, indicating the necessity of dose accumulation using 4D-CT. The doses measured with OSL and the theoretically calculated doses using the probability distribution function at the corresponding positions were comparable. Conclusion: Use of a static dose calculation in the treatment planning system could substantially underestimate the actually delivered dose in the MD region for a moving target. Funding: Supported by NSFC, No. 81372436.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying of the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
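The inflation effect is easy to reproduce numerically: estimate a p-quantile threshold from a finite sample, then evaluate the true exceedance probability of that threshold. For the lognormal case below (working on the log scale; sample size and parameters illustrative), the realized failure frequency averaged over samples exceeds the nominal level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
mu, sigma = 0.0, 1.0                  # true (unknown) lognormal parameters
nominal_p, n, trials = 0.01, 30, 20_000
z = stats.norm.ppf(1 - nominal_p)

realized = np.empty(trials)
for t in range(trials):
    y = rng.normal(mu, sigma, n)      # log-losses observed by the decisionmaker
    mu_hat, sd_hat = y.mean(), y.std(ddof=1)
    threshold = mu_hat + z * sd_hat   # plug-in estimate of the p-quantile
    # True probability that the risk factor exceeds the chosen threshold
    realized[t] = 1 - stats.norm.cdf((threshold - mu) / sigma)

print(f"nominal failure prob {nominal_p}, expected realized {realized.mean():.4f}")
```

The gap between the nominal and the expected realized probability is what the article's adjustments (reducing the nominal level, or modifying the control distribution) are designed to close.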
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1993-10-01
Performance of 8 three-parameter probability distributions for representing annual extreme and partial duration precipitation data at stations in the northeastern and southeastern United States is investigated. Particular attention is paid to fidelity on the right tail, through use of a bootstrap procedure simulating extrapolation on the right tail beyond the data. It is found that the beta-κ distribution best describes the extreme right tail of annual extreme series, and the beta-P distribution is best for the partial duration data. The conventionally employed two-parameter Gumbel distribution is found to substantially underestimate probabilities associated with the larger precipitation amounts for both annual extreme and partial duration data. Fitting the distributions using left-censored data did not result in improved fits to the right tail.
NASA Astrophysics Data System (ADS)
Kaplan, D. A.; Reaver, N.; Hensley, R. T.; Cohen, M. J.
2017-12-01
Hydraulic transport is an important component of nutrient spiraling in streams. Quantifying conservative solute transport is a prerequisite for understanding the cycling and fate of reactive solutes, such as nutrients. Numerous studies have modeled solute transport within streams using the one-dimensional advection, dispersion and storage (ADS) equation calibrated to experimental data from tracer experiments. However, there are limitations to the information about in-stream transient storage that can be derived from calibrated ADS model parameters. Transient storage (TS) in the ADS model is most often modeled as a single process, and calibrated model parameters are "lumped" values that are the best-fit representation of multiple real-world TS processes. In this study, we developed a roving profiling method to assess and predict spatial heterogeneity of in-stream TS. We performed five tracer experiments on three spring-fed rivers in Florida (USA) using Rhodamine WT. During each tracer release, stationary fluorometers were deployed to measure breakthrough curves for multiple reaches within the river. Teams of roving samplers moved along the rivers measuring tracer concentrations at various locations and depths within the reaches. A Bayesian statistical method was used to calibrate the ADS model to the stationary breakthrough curves, resulting in probability distributions for both the advective and TS zones as a function of river distance and time. Rover samples were then assigned a probability of being from either the advective or TS zone by comparing measured concentrations to the probability distributions of concentrations in the ADS advective and TS zones. A regression model was used to predict the probability of any in-stream position being located within the advective versus TS zone based on spatiotemporal predictors (time, river position, depth, and distance from bank) and eco-geomorphological features (eddies, woody debris, benthic depressions, and aquatic vegetation). Results confirm that TS is spatially variable as a function of spatiotemporal and eco-geomorphological features. A substantial number of samples with nearly equivalent chances of being from the advective or TS zones suggests that the distinction between zones is often poorly defined.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
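The principle admits a compact sketch: learn conditional clade (split) frequencies from a tree sample, then score any tree, sampled or not, as a product of conditionals. The toy trees below are invented and the code illustrates the idea, not the article's software:

```python
# Sketch: conditional clade distribution (CCD) estimate of tree probability.
from collections import Counter

def clades(tree, store):
    """Return the taxon set of `tree`; record (parent clade, split) pairs."""
    if isinstance(tree, str):                  # leaf
        return frozenset([tree])
    left, right = clades(tree[0], store), clades(tree[1], store)
    parent = left | right
    store.append((parent, frozenset([left, right])))
    return parent

def ccd_probability(tree, parent_counts, split_counts):
    store = []
    clades(tree, store)
    p = 1.0
    for parent, split in store:
        if split_counts[(parent, split)] == 0:
            return 0.0                         # clade combination never seen
        p *= split_counts[(parent, split)] / parent_counts[parent]
    return p

# a posterior "sample" of two topologies on four taxa
sample = [((("A", "B"), "C"), "D")] * 7 + [(("A", "B"), ("C", "D"))] * 3
parent_counts, split_counts = Counter(), Counter()
for t in sample:
    store = []
    clades(t, store)
    for parent, split in store:
        parent_counts[parent] += 1
        split_counts[(parent, split)] += 1

for t in set(sample):
    print(t, ccd_probability(t, parent_counts, split_counts))
```

With richer samples the product form assigns positive probability to unsampled trees whose clades all occur somewhere in the sample, which is exactly the property exploited in the article.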
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during wave impact were first obtained through statistical analyses of experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions is presented comprehensively; the parameter values of the Weibull distribution of wave-slamming pressure peaks differed among test models and were found to decrease with increased stiffness of the elastic support. The damage criterion of the structure model under wave impact was initially discussed: the structure model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact.
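A quick sketch of the fitting step with scipy's three-parameter Weibull (synthetic peaks; scipy's shape/loc/scale parametrization, not the paper's D and L):

```python
# Sketch: fit a three-parameter Weibull to slamming-pressure peaks and
# evaluate the exceedance probability at the sample mean.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
peaks = weibull_min.rvs(c=1.6, loc=0.8, scale=1.1, size=500, random_state=rng)

c, loc, scale = weibull_min.fit(peaks)
print("exceedance P(peak > sample mean):",
      weibull_min.sf(peaks.mean(), c, loc, scale))
# for a loc=0 Weibull evaluated at its scale parameter, sf = exp(-1) = 0.3679,
# which is presumably where the paper's 36.79% figure comes from
```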
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
Bayesian time series analysis of segments of the Rocky Mountain trumpeter swan population
Wright, Christopher K.; Sojda, Richard S.; Goodman, Daniel
2002-01-01
A Bayesian time series analysis technique, the dynamic linear model, was used to analyze counts of Trumpeter Swans (Cygnus buccinator) summering in Idaho, Montana, and Wyoming from 1931 to 2000. For the Yellowstone National Park segment of white birds (sub-adults and adults combined) the estimated probability of a positive growth rate is 0.01. The estimated probability of achieving the Subcommittee on Rocky Mountain Trumpeter Swans 2002 population goal of 40 white birds for the Yellowstone segment is less than 0.01. Outside of Yellowstone National Park, Wyoming white birds are estimated to have a 0.79 probability of a positive growth rate with a 0.05 probability of achieving the 2002 objective of 120 white birds. In the Centennial Valley in southwest Montana, results indicate a probability of 0.87 that the white bird population is growing at a positive rate with considerable uncertainty. The estimated probability of achieving the 2002 Centennial Valley objective of 160 white birds is 0.14 but under an alternative model falls to 0.04. The estimated probability that the Targhee National Forest segment of white birds has a positive growth rate is 0.03. In Idaho outside of the Targhee National Forest, white birds are estimated to have a 0.97 probability of a positive growth rate with a 0.18 probability of attaining the 2002 goal of 150 white birds.
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; here we find the phenomenon in model systems and provide a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
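For the capital-dependent games the exact distribution can also be obtained by directly iterating the probability vector over capital states, a transparent (if less elegant) alternative to the Fourier route used in the paper. Standard game parameters with bias eps; this sketch is not the authors' code:

```python
# Sketch: exact capital distribution for the capital-dependent Parrondo games.
import numpy as np

eps = 0.005
N = 100                       # number of rounds
offset = N                    # capital ranges over [-N, N]
prob = np.zeros(2 * N + 1)
prob[offset] = 1.0            # start with capital 0

def step(prob, game):
    new = np.zeros_like(prob)
    for i, p_i in enumerate(prob):
        if p_i == 0.0:
            continue
        capital = i - offset
        if game == "A":
            p_win = 0.5 - eps
        else:                 # game B: modulo-3 rule
            p_win = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        new[i + 1] += p_i * p_win
        new[i - 1] += p_i * (1 - p_win)
    return new

for n in range(N):            # alternate A, B, A, B, ...
    prob = step(prob, "A" if n % 2 == 0 else "B")

capitals = np.arange(-N, N + 1)
print("mean capital after", N, "rounds:", (capitals * prob).sum())
# capital parity equals round parity, so alternate entries vanish;
# comparing N and N+1 rounds exhibits the two limiting distributions
print("probabilities at capitals -2..2:", prob[offset - 2 : offset + 3])
```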
Reliability of windstorm predictions in the ECMWF ensemble prediction system
NASA Astrophysics Data System (ADS)
Becker, Nico; Ulbrich, Uwe
2016-04-01
Windstorms caused by extratropical cyclones are one of the most dangerous natural hazards in the European region. Therefore, reliable predictions of such storm events are needed. Case studies have shown that ensemble prediction systems (EPS) are able to provide useful information about windstorms between two and five days prior to the event. In this work, ensemble predictions with the European Centre for Medium-Range Weather Forecasts (ECMWF) EPS are evaluated over a four-year period. Within the 50 ensemble members, which are initialized every 12 hours and run for 10 days, windstorms are identified and tracked in time and space. By using a clustering approach, different predictions of the same storm are identified in the different ensemble members and compared to reanalysis data. The occurrence probability of the predicted storms is estimated by fitting a bivariate normal distribution to the storm track positions. Our results show, for example, that predicted storm clusters with occurrence probabilities of more than 50% have a matching observed storm in 80% of all cases at a lead time of two days. The predicted occurrence probabilities are reliable up to 3 days lead time. At longer lead times the occurrence probabilities are overestimated by the EPS.
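The occurrence-probability step can be sketched directly: fit a bivariate normal to the member positions of one storm cluster and integrate it around a target point, here by sampling. The member positions and the 250 km radius are invented:

```python
# Sketch: occurrence probability from a bivariate normal fitted to
# ensemble storm-centre positions.
import numpy as np

rng = np.random.default_rng(3)
members = rng.multivariate_normal(mean=[10.0, 55.0],            # lon, lat
                                  cov=[[4.0, 1.0], [1.0, 2.0]], size=50)

mu = members.mean(axis=0)
cov = np.cov(members, rowvar=False)

target = np.array([11.0, 56.0])
draws = rng.multivariate_normal(mu, cov, size=100_000)
# crude flat-earth distances in km (1 deg lat ~ 111 km; lon scaled by cos lat)
dx = (draws[:, 0] - target[0]) * 111.0 * np.cos(np.radians(target[1]))
dy = (draws[:, 1] - target[1]) * 111.0
print("P(storm centre within 250 km):", np.mean(np.hypot(dx, dy) < 250.0))
```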
Quantum stochastic walks on networks for decision-making.
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-31
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process reveals to be also a measure of the process' degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.
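The classical skeleton of that construction fits in a few lines: Luce's response probabilities define a stochastic matrix over the choice alternatives, and its stationary distribution is the purely classical limit of the choice probabilities. The full quantum stochastic walk interpolates between this chain and unitary dynamics and needs a Lindblad integrator (e.g. QuTiP), which is beyond a short snippet; the weights below are hypothetical:

```python
# Sketch: stationary distribution of the classical chain built from
# Luce response probabilities.
import numpy as np

weights = np.array([1.0, 2.0, 4.0])          # hypothetical Luce utilities
n = len(weights)

T = np.tile(weights, (n, 1)).astype(float)   # move toward j with weight u_j
np.fill_diagonal(T, 0.0)
T /= T.sum(axis=1, keepdims=True)            # row-stochastic

w, v = np.linalg.eig(T.T)                    # stationary = left eigenvector
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("stationary choice probabilities:", pi)
```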
Economic and demographic effects on working women in Latin America.
Psaharopoulos, G; Tzannatos, Z
1993-01-01
This analysis of women's work conditions in Latin America includes a description of general trends in female labor force participation in 15 Latin American countries based on census data between 1950 and 1990. Also examined are pay differentials by gender and whether gender alone or individual characteristics of women workers accounted for the sex-wage gap. More extensive treatment is available in the authors' other 1992 publications. Trends indicate that marriage and children were important factors determining whether women were in the labor force or not. The probability of being in the labor force was reduced by 50% for married women, and each child reduced the probability by 5%. When marriage and children were controlled for, age had a positive effect on the probability of participation. Being an urban female head of household had a positive effect on women's labor force participation. The higher a woman's educational qualification, the greater the probability of being in the work force. Earnings increased with increased educational level. An increase of 1 year of schooling for women contributed to an increase in female earnings of 13.1%. Investment in education has a higher yield for women than for men. Policies that directly or indirectly improve women's employment opportunities, particularly when families are being formed, can have wide distributional effects. Also unresolved was an explanation for why female participation increased during periods of recession and why women are rewarded more for educational effort than men. The suggestion was that public sector employment, which included many women in the labor force, is distorting results.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field, calibrated using various geodetic data, allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
Xu, Liqiang; Liu, Xiaodong; Nie, Yaguang
2016-05-01
Seabird subfossils were collected on three islands of the Xisha Archipelago, South China Sea. Via elemental analysis, we identified bird guano as a significant source of the heavy metals Cu, Zn, and Hg. Cu and Zn levels in these guano samples are comparable to their levels in wild bird feces, but guano Hg was lower than previously reported. Trophic position significantly impacted the transfer efficiency of heavy metals by seabirds. Despite a common source, trace elements, as well as stable isotopes (i.e., guano δ(13)C and collagen δ(15)N), showed island-specific characteristics. Bird subfossils on the larger islands had relatively greater metal concentrations and indicated higher trophic positions. Partitioning of element and isotope levels among the islands suggested that the transfer efficacy of seabirds differed among islands, and that bird species were probably unevenly distributed among the islets. Island area is possibly a driving factor for the distribution of seabird species.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Measurements of gas hydrate formation probability distributions on a quasi-free water droplet
NASA Astrophysics Data System (ADS)
Maeda, Nobuo
2014-06-01
A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA, gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. Ideally one would measure the gas hydrate formation probability distributions of a single water droplet or mist freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of the water droplet with the solid walls. Here we report the development of a second-generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet sitting on a perfluorocarbon oil in a container coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
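The long-time limit at the core of the model is a single power iteration. A four-site toy matrix stands in here for the calibrated 50-site ensemble:

```python
# Sketch: steady-state metastatic distribution by power iteration.
import numpy as np

sites = ["lung", "lymph", "liver", "bone"]
T = np.array([
    [0.10, 0.40, 0.30, 0.20],   # from lung (primary)
    [0.25, 0.25, 0.30, 0.20],   # from lymph nodes
    [0.20, 0.30, 0.30, 0.20],   # from liver
    [0.20, 0.30, 0.25, 0.25],   # from bone
])

dist = np.array([1.0, 0.0, 0.0, 0.0])   # walkers start at the primary
for _ in range(200):                    # long-time limit
    dist = dist @ T

for site, p in zip(sites, dist):
    print(f"{site:6s} {p:.3f}")
```

The study inverts this computation: it searches for an ensemble of such matrices whose steady state reproduces the autopsy-derived metastatic distribution.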
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, Javed; Singhal, Akshat; Gadre, Bhooshan
2017-04-01
The discovery and subsequent study of optical counterparts to transient sources is crucial for their complete astrophysical understanding. Various gamma-ray burst (GRB) detectors, and more notably the ground-based gravitational wave detectors, typically have large uncertainties in the sky positions of detected sources. Searching these large sky regions spanning hundreds of square degrees is a formidable challenge for most ground-based optical telescopes, which can usually image less than tens of square degrees of the sky in a single night. We present algorithms for better scheduling of such follow-up observations in order to maximize the probability of imaging the optical counterpart, based on the all-sky probability distribution of the source position. We incorporate realistic observing constraints such as the diurnal cycle, telescope pointing limitations, available observing time, and the rising/setting of the target at the observatory’s location. We use simulations to demonstrate that our proposed algorithms outperform the default greedy observing schedule used by many observatories. Our algorithms are applicable for follow-up of other transient sources with large positional uncertainties, such as Fermi-detected GRBs, and can easily be adapted for scheduling radio or space-based X-ray follow-up.
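The gain from non-greedy scheduling is easy to reproduce in miniature: when high-probability fields can set below the horizon, picking purely by probability can waste observable slots. The toy below invents fields, probabilities, and set times (the paper's algorithms model real pointing limits, slews, and rising targets):

```python
# Sketch: greedy versus urgency-aware tile scheduling over a probability map.
import numpy as np

rng = np.random.default_rng(5)
n_fields, n_slots = 12, 6
prob = rng.dirichlet(np.ones(n_fields))            # localisation probability
set_slot = rng.integers(1, n_slots + 1, n_fields)  # slot after which field sets

def run(order_key):
    covered, observed = 0.0, set()
    for slot in range(n_slots):
        up = [i for i in range(n_fields)
              if i not in observed and slot < set_slot[i]]
        if not up:
            break
        pick = max(up, key=order_key)
        observed.add(pick)
        covered += prob[pick]
    return covered

print(f"greedy covered probability: {run(lambda i: prob[i]):.3f}")
print(f"urgency-aware probability : {run(lambda i: prob[i] / set_slot[i]):.3f}")
```

Which rule wins depends on the draw; the paper's point is that optimizing the schedule as a whole, rather than slot by slot, raises the expected covered probability.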
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, E.R.; Hyams, K.C.
1996-07-01
The distribution of Phlebotomus papatasi in Southwest Asia is thought to be highly dependent on temperature and relative humidity. A discriminant analysis model based on weather data and reported vector surveys was developed to predict the seasonal and geographic distribution of P. papatasi in this region. To simulate global warming, temperature values for 115 weather stations were increased by 1°C, 3°C, and 5°C, and the outcome variable coded as unknown in the model. Probability of occurrence values were then predicted for each location with a weather station. Stations with positive probability of occurrence values for May, June, July, and August were considered locations where two or more life cycles of P. papatasi could occur and which could support endemic transmission of leishmaniasis and sandfly fever. Among the 115 weather stations, 71 (62%) would be considered endemic under current temperature conditions; 14 (12%) additional stations could become endemic with an increase of 1°C; 17 (15%) more with a 3°C increase; and 12 (10%) more (all but one station) with a 5°C increase. In addition to the increased geographic distribution, the season of disease transmission could extend throughout all 12 months of the year in 7 (6%) locations with at least a 3°C rise in temperature and in 29 (25%) locations with a 5°C rise.
Fragment size distribution in viscous bag breakup of a drop
NASA Astrophysics Data System (ADS)
Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.
2015-11-01
In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The regime of breakup considered is observed between 12 <= We <= 16 for Oh <= 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh we observe a larger number of small-diameter drops and a smaller number of large-diameter drops. On the other hand, with increasing Oh for a fixed We the opposite is seen.
Male breast cancer according to tumor subtype and race: a population-based study.
Chavez-Macgregor, Mariana; Clarke, Christina A; Lichtensztajn, Daphne; Hortobagyi, Gabriel N; Giordano, Sharon H
2013-05-01
Breast cancer occurs rarely in men. To the authors' knowledge, no population-based estimates of the incidence of human epidermal growth factor receptor 2 (HER2)-positive breast cancer or of the distribution of breast cancer subtypes among male breast cancer patients have been published to date. Therefore, the objective of the current study was to explore breast tumor subtype distribution by race/ethnicity among men in the large, ethnically diverse population of California. This study included men who were diagnosed with invasive breast cancer between 2005 and 2009 with known estrogen receptor (ER) and progesterone receptor (PR) (together, hormone receptor [HR]) status and HER2 status reported to the California Cancer Registry. Among the men with HR-positive tumors, survival probabilities between groups were compared using log-rank tests. Six hundred six patients were included. The median age at diagnosis was 68 years. Four hundred ninety-four men (81.5%) had HR-positive tumors (defined as ER-positive and/or PR-positive and HER2-negative). Ninety men (14.9%) had HER2-positive tumors, and 22 (3.6%) had triple receptor-negative (TN) tumors. Among the patients with HR-positive tumors, non-Hispanic black men and Hispanic men were more likely to have PR-negative tumors than non-Hispanic white men. No statistically significant differences in survival were observed according to tumor subtype (P = .08). Differences in survival according to race/ethnicity were observed among all patients (P = .087) and among those with HR-positive tumors (P = .0170), and non-Hispanic black men had poorer outcomes. In this large, representative cohort of men with breast cancer, the distribution of tumor subtypes was different from that reported for women and varied by patient race/ethnicity. Non-Hispanic black men were more likely to have TN tumors and ER-positive/PR-negative tumors than white men. Copyright © 2013 American Cancer Society.
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks accruing to the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After using a Monte Carlo technique to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, factors affecting those consequences are identified in conjunction with their probability distributions. The functional relationship among all the factors is then established, and probability distributions for all factor effects are combined by means of a Monte Carlo technique.
Singh, Deependra; Pitkäniemi, Janne; Malila, Nea; Anttila, Ahti
2016-09-01
Mammography has been found effective as the primary screening test for breast cancer. We estimated the cumulative probability of false positive screening test results with respect to symptom history reported at screen. A historical prospective cohort study was done using individual screening data from 413,611 women aged 50-69 years with 2,627,256 invitations for mammography screening between 1992 and 2012 in Finland. Symptoms (lump, retraction, and secretion) were reported at 56,805 visits, and 48,873 visits resulted in a false positive mammography result. Generalized linear models were used to estimate the probability of at least one false positive test and true positive at screening visits. The estimates were compared among women with and without symptoms history. The estimated cumulative probabilities were 18 and 6 % for false positive and true positive results, respectively. In women with a history of a lump, the cumulative probabilities of false positive test and true positive were 45 and 16 %, respectively, compared to 17 and 5 % with no reported lump. In women with a history of any given symptom, the cumulative probabilities of false positive test and true positive were 38 and 13 %, respectively. Likewise, women with a history of a 'lump and retraction' had the cumulative false positive probability of 56 %. The study showed higher cumulative risk of false positive tests and more cancers detected in women who reported symptoms compared to women who did not report symptoms at screen. The risk varies substantially, depending on symptom types and characteristics. Information on breast symptoms influences the balance of absolute benefits and harms of screening.
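Under an independence assumption the cumulative figures follow from per-round rates via 1 - (1 - p)^k. The per-round rates below are invented, chosen only so the outputs land near the study's reported 18% overall and 45% with a lump:

```python
# Sketch: cumulative probability of at least one false positive result
# over repeated screening invitations, assuming independent rounds.
rounds = 10                    # biennial screens between ages 50 and 69

p_fp = 0.02                    # hypothetical per-round false positive rate
print(f"overall     : {1 - (1 - p_fp) ** rounds:.2f}")       # ~0.18

p_fp_lump = 0.055              # hypothetical rate given a reported lump
print(f"lump history: {1 - (1 - p_fp_lump) ** rounds:.2f}")  # ~0.43
```

The study itself estimates these quantities from visit-level data with generalized linear models rather than assuming independent rounds.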
Score distributions of gapped multiple sequence alignments down to the low-probability tail
NASA Astrophysics Data System (ADS)
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
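For orientation, the simple-sampling baseline the paper improves on is easy to write down. The sketch below samples gapless local alignment scores for random sequences; simple sampling resolves the distribution only down to about 1/n_samples, whereas the rare-event methods reach ~10^-160 by biasing the sequence ensemble. The +2/-3 scoring scheme is a toy choice:

```python
# Sketch: simple sampling of the optimal gapless local alignment score.
import numpy as np

def best_gapless_local_score(a, b):
    """Max over all diagonals of the best (Kadane-style) scoring run."""
    best = 0
    for shift in range(-len(a) + 1, len(b)):
        i, j = max(0, -shift), max(0, shift)
        run = 0
        while i < len(a) and j < len(b):
            run = max(0, run + (2 if a[i] == b[j] else -3))
            best = max(best, run)
            i += 1
            j += 1
    return best

rng = np.random.default_rng(6)
L, n_samples = 30, 2000
scores = [best_gapless_local_score(rng.integers(0, 4, L), rng.integers(0, 4, L))
          for _ in range(n_samples)]
vals, counts = np.unique(scores, return_counts=True)
for v, c in zip(vals, counts):
    print(f"score {v:2d}  P ~ {c / n_samples:.4f}")
```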
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high precision beta decay experiments on polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, have resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be numerically evaluated with arbitrary precision.
The structure of water around the compressibility minimum
L. B. Skinner; Benmore, C. J.; Parise, J.; ...
2014-12-03
Here we present diffraction data that yield the oxygen-oxygen pair distribution function, gOO(r), over the range 254.2–365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in gOO(r) concomitant with the compressibility minimum at 319 K.
Plasma Diffusion in Self-Consistent Fluctuations
NASA Technical Reports Server (NTRS)
Smets, R.; Belmont, G.; Aunai, N.
2012-01-01
The problem of particle diffusion in position space, as a consequence of electromagnetic fluctuations, is addressed. Numerical results obtained with a self-consistent hybrid code are presented, and a method to calculate the diffusion coefficient in the direction perpendicular to the mean magnetic field is proposed. The diffusion is estimated for two different types of fluctuations. The first type (resulting from an agyrotropic initial setting) is stationary, wide-band white noise, and associated with a Gaussian probability distribution function for the magnetic fluctuations. The second type (resulting from a Kelvin-Helmholtz instability) is non-stationary, with a power-law spectrum and a non-Gaussian probability distribution function. The results of the study allow revisiting the question of loading particles of solar wind origin into the Earth's magnetosphere.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
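A minimal version of the integrated likelihood is short to write down when p is mixed over a beta distribution, since the binomial-beta mixture has the closed beta-binomial form. Synthetic data; a sketch of the model class, not the article's exact fitting code:

```python
# Sketch: zero-inflated beta-binomial likelihood for site occupancy with
# heterogeneous detection probability p ~ Beta(a, b).
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

rng = np.random.default_rng(11)
J, n_sites, psi_true = 5, 200, 0.6
occupied = rng.random(n_sites) < psi_true
y = np.where(occupied, rng.binomial(J, rng.beta(2.0, 3.0, n_sites)), 0)

def neg_log_lik(theta):
    psi = 1 / (1 + np.exp(-theta[0]))       # occupancy on the logit scale
    a, b = np.exp(theta[1]), np.exp(theta[2])
    mix = betabinom.pmf(y, J, a, b)         # detection p integrated out
    lik = psi * mix + (1 - psi) * (y == 0)  # zero inflation for absence
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("estimated occupancy psi:", 1 / (1 + np.exp(-fit.x[0])))
```

Link's identifiability warning, echoed in the article, is visible here: different mixing families for p can fit such data near-equally while implying different psi.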
NASA Astrophysics Data System (ADS)
Kulanin, N. V.
1985-03-01
The time spectrum of variations in seismicity is quite broad. There are seismic seasons, as well as multiannual variations. The range of characteristic times of variation from days to about one year is studied. Seismic activity is examined as a function of the position of the Moon relative to the Earth and the direction toward the Sun. The moments of strong earthquakes, over 5.8 on the Richter scale, between 1968 and June 1980 are plotted in time coordinates relating them to the relative positions of the three bodies in the Sun-Earth-Moon system. Methods of mathematical statistics applied to the resulting points indicate at least a 99% probability that the distribution was not random. A periodicity of 413 days in the Earth's seismic state is observed.
NASA Astrophysics Data System (ADS)
Ahmadov, G. S.; Kopatch, Yu. N.; Telezhnikov, S. A.; Ahmadov, F. I.; Granja, C.; Garibov, A. A.; Pospisil, S.
2015-07-01
The silicon-based pixel detector Timepix is a multi-parameter detector which simultaneously gives information about the position, energy, and arrival time of a particle hitting the detector. Applying the ΔE-E method with these detectors makes it possible to determine the types of detected particles, separating them by charge. Using a thin silicon detector with a thickness of 12 μm combined with a Timepix (300 μm), a ΔE-E telescope has been constructed. The telescope provides information about the position, energy, time, and type of registered particles. The emission probabilities and the energy distributions of ternary particles (He, Li, Be) from a 252Cf spontaneous fission source were determined using this telescope. Besides the ternary particles, a few events were collected that were attributed to "pseudo" quaternary fission.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
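The update itself is two lines per iteration. A sketch on a single bivariate table with fixed marginals (the article applies the same fitting to multivariate facies probabilities with many bivariate constraints):

```python
# Sketch: iterative proportional fitting of a joint probability table
# to prescribed row and column marginals.
import numpy as np

target_row = np.array([0.5, 0.3, 0.2])   # marginal of facies at location 1
target_col = np.array([0.6, 0.4])        # marginal of facies at location 2

P = np.full((3, 2), 1 / 6)               # initial joint estimate
for _ in range(100):
    P *= (target_row / P.sum(axis=1))[:, None]   # match row marginal
    P *= target_col / P.sum(axis=0)              # match column marginal

print(P)
print("rows:", P.sum(axis=1), "cols:", P.sum(axis=0))
```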
Amirataee, Babak; Montaseri, Majid; Rezaie, Hossein
2018-01-15
Droughts are extreme events characterized by temporal duration and spatial large-scale effects. In general, regional droughts are affected by the general circulation of the atmosphere (at large scale) and by regional natural factors, including topography, natural lakes, and the position relative to the centers and paths of ocean currents (at small scale), so their effects are not uniform over a wide area. Therefore, investigation of drought Severity-Area-Frequency (S-A-F) curves is an essential task in developing decision-making rules for regional drought management. This study developed a copula-based joint probability distribution of drought severity and percent of area under drought across the Lake Urmia basin, Iran. To this end, one-month Standardized Precipitation Index (SPI) values during 1971-2013 were applied across 24 rainfall stations in the study area. Then, seven copula functions of various families, including the Clayton, Gumbel, Frank, Joe, Galambos, Plackett, and Normal copulas, were used to model the joint probability distribution of drought severity and drought area. Using AIC, BIC, and RMSE criteria, the Frank copula was selected as the most appropriate copula for developing the joint probability distribution of severity and percent of area under drought across the study area. Based on the Frank copula, the drought S-A-F curve for the study area was derived. The results indicated that severe/extreme drought and non-drought (wet) behaviors have affected the majority of the study area (Lake Urmia basin). However, the area covered by specific semi-drought effects is limited and has been subject to significant variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
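The Frank copula itself is a one-line formula, so the joint exceedance computation at the heart of an S-A-F curve is easy to sketch. Theta and the marginal levels below are invented, not the fitted basin values:

```python
# Sketch: joint exceedance of drought severity and areal extent via a
# Frank copula, using P(S > s, A > a) = 1 - u - v + C(u, v).
import numpy as np

def frank_cdf(u, v, theta):
    """Frank copula C(u, v) for theta != 0."""
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

theta = 5.0       # hypothetical dependence parameter
u, v = 0.9, 0.8   # marginal non-exceedance of severity and of percent area
print(f"joint exceedance probability: {1 - u - v + frank_cdf(u, v, theta):.4f}")
```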
Rusalova, M N; Kostiunina, M B
2003-01-01
The pattern of inter-hemispheric distribution of EEG amplitude and frequency as a function of the levels of emotional experience and motivation, as well as the probability of goal achievement, was studied in 20 subjects. An emotional state was evoked by simulating emotionally colored events. A modified test of Prise et al. (1985) was used to evaluate the level of motivation for the simulated event as well as the probability of goal achievement from the lengths of line segments marked by the subjects. Here we studied the simulated emotion of joy. The highest correlation coefficients were observed between awareness and alpha activity in both hemispheres. The levels of emotional experience and motivation inversely correlated with delta and theta activity, mostly in the left hemisphere. Beta activity correlated with both the emotional and motivation levels.
General formulation of long-range degree correlations in complex networks
NASA Astrophysics Data System (ADS)
Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke
2018-06-01
We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
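The conditional distributions are straightforward to tabulate empirically on any graph. A sketch with networkx on an Erdős-Rényi graph, which should show no intrinsic long-range correlations beyond the finite-size effects the paper analyzes:

```python
# Sketch: empirical P(k' | k, l), the degree distribution of nodes at
# shortest-path distance l from a node of degree k.
import networkx as nx
from collections import Counter, defaultdict

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
deg = dict(G.degree())

counts = defaultdict(Counter)                 # (k, l) -> histogram of k'
for u, lengths in nx.all_pairs_shortest_path_length(G):
    for v, l in lengths.items():
        if u != v:
            counts[(deg[u], l)][deg[v]] += 1

k, l = 10, 2
hist = counts[(k, l)]
total = sum(hist.values())
if total:
    print(f"P(k'|k={k}, l={l}):",
          {kp: round(c / total, 3) for kp, c in sorted(hist.items())})
```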
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
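The fitting step for those conditional distributions can be reproduced in form with scipy on synthetic durations (loc pinned at zero so only the two gamma parameters are estimated):

```python
# Sketch: two-parameter gamma fit to rest-period durations.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
rest_periods = gamma.rvs(a=1.8, scale=40.0, size=400, random_state=rng)

shape, loc, scale = gamma.fit(rest_periods, floc=0.0)
print(f"shape: {shape:.2f}  scale: {scale:.1f}")
print("P(rest period > 120):", gamma.sf(120.0, shape, loc, scale))
```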
2015-01-01
Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm.
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs little underutilised and little overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated from a well defined duration distribution of the lists. To propose a method of predicting the probabilities of underrun and overrun of lists of cases using the Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated using the Type IV Pearson distribution and the t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers; values are mean (SEM). The absolute deviations between the percentiles defined by the Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. The proposed Type IV Pearson distribution is more accurate than the t-distribution for estimating the probabilities of underrun and overrun of lists of cases. However, as not all individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
Ribosome flow model with positive feedback
Margaliot, Michael; Tuller, Tamir
2013-01-01
Eukaryotic mRNAs usually form a circular structure; thus, ribosomes that terminate translation at the 3′ end can diffuse with increased probability to the 5′ end of the transcript, initiating another cycle of translation. This phenomenon describes ribosomal flow with positive feedback—an increase in the flow of ribosomes terminating translation of the open reading frame increases the ribosomal initiation rate. The aim of this paper is to model and rigorously analyse translation with feedback. We suggest a modified version of the ribosome flow model, called the ribosome flow model with input and output. In this model, the input is the initiation rate and the output is the translation rate. We analyse this model after closing the loop with a positive linear feedback. We show that the closed-loop system admits a unique globally asymptotically stable equilibrium point. From a biophysical point of view, this means that there exists a unique steady state of ribosome distributions along the mRNA, and thus a unique steady-state translation rate. The solution from any initial distribution will converge to this steady state. The steady-state distribution demonstrates a decrease in ribosome density along the coding sequence. For the case of constant elongation rates, we obtain expressions relating the model parameters to the equilibrium point. These results may perhaps be used to re-engineer the biological system in order to obtain a desired translation rate.
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal, as well as of the device, must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
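The probability itself reduces to counting trajectory pairs that ever come within the combined collision radius. A heavily reduced sketch, with both animal and device collapsed to spheres on invented trajectories:

```python
# Sketch: Monte Carlo collision probability from 3-D trajectories over time.
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_t = 10_000, 200
t = np.linspace(0.0, 60.0, n_t)                        # seconds

# the device sweeps a figure-eight at 15 m depth
kite = np.stack([20 * np.sin(0.2 * t),
                 10 * np.sin(0.4 * t),
                 -15.0 + 0.0 * t], axis=1)

start = rng.uniform([-50, -50, -30], [50, 50, 0], (n_paths, 3))
heading = rng.normal(0, 1, (n_paths, 3))
heading /= np.linalg.norm(heading, axis=1, keepdims=True)
speed = rng.uniform(0.5, 2.0, (n_paths, 1))

hits = 0
for s, h, v in zip(start, heading, speed):
    animal = s + v * t[:, None] * h                    # straight-line swim
    d = np.linalg.norm(animal - kite, axis=1)
    hits += bool((d < 3.0).any())                      # combined radii: 3 m
print("collision probability:", hits / n_paths)
```

The tool described replaces these spheres and straight paths with the actual shapes and transient motions resolved by the numerical simulations.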
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
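For a pure AND gate the closed form is exact, since a product of independent lognormals is lognormal, which allows a compact check against sampling and a Wilks-style bound (toy parameters; general fault trees need the article's approximation):

```python
# Sketch: top-event uncertainty for an AND gate with lognormal basic events,
# closed form versus a Wilks order-statistic bound.
import numpy as np
from scipy.stats import norm

mu = np.array([-6.0, -4.5])     # log-means of two basic event probabilities
sigma = np.array([0.8, 0.6])    # log-standard deviations

# product of lognormals -> lognormal(mu.sum(), sqrt(sum(sigma^2)))
mu_top, sigma_top = mu.sum(), np.sqrt((sigma ** 2).sum())
q95_exact = np.exp(mu_top + sigma_top * norm.ppf(0.95))

# Wilks: the max of 59 samples exceeds the 95th percentile with ~95% confidence
rng = np.random.default_rng(8)
samples = np.exp(rng.normal(mu_top, sigma_top, 59))

print("closed-form 95th percentile:", q95_exact)
print("Wilks upper bound (n = 59) :", samples.max())
```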
Quantum key distribution without the wavefunction
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution admits a much more general and abstract treatment than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as the origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.
Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.
2002-01-01
The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments and volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing, and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used in studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff, both for the hydrogeologic units in the region and for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appear to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
Spatial distribution on high-order-harmonic generation of an H2+ molecule in intense laser fields
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ge, Xin-Lei; Wang, Tian; Xu, Tong-Tong; Guo, Jing; Liu, Xue-Shen
2015-07-01
High-order-harmonic generation (HHG) for the H2+ molecule in a 3-fs, 800-nm few-cycle Gaussian laser pulse combined with a static field is investigated by solving the one-dimensional electronic and one-dimensional nuclear time-dependent Schrödinger equation within the non-Born-Oppenheimer approximation. The spatial distribution of the HHG is demonstrated, and the results reveal the recombination of the electron with each of the two nuclei. The spatial distribution of the HHG spectra shows that there is little possibility of recombination of the electron with the nuclei around the origin z = 0 a.u. and the equilibrium internuclear positions z = ±1.3 a.u. This characteristic is irrelevant to the laser parameters and is attributed solely to the molecular structure. Furthermore, we investigate the time-dependent electron-nuclear wave packet and ionization probability to further explain the underlying physical mechanism.
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
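The Welch-Satterthwaite formula itself is standard: for independent inputs with standard uncertainties u_i and degrees of freedom nu_i combined through a linear measurement equation, nu_eff = u_c^4 / sum(u_i^4 / nu_i). A minimal sketch contrasting the t-based expanded uncertainty with a plain normal coverage factor follows; the input values are hypothetical, and the normal line is only a stand-in for the paper's Bayesian alternative, which is built on Bayesian standard uncertainties rather than the sampling-theory values used here.

```python
import numpy as np
from scipy import stats

# Hypothetical inputs: standard uncertainties u_i and degrees of freedom nu_i
# of independent input quantities; unit sensitivity coefficients assumed.
u = np.array([0.15, 0.08, 0.21])
nu = np.array([4, 9, 7])

u_c = np.sqrt(np.sum(u**2))              # combined standard uncertainty
nu_eff = u_c**4 / np.sum(u**4 / nu)      # Welch-Satterthwaite formula

# 95% expanded uncertainty: ISO-GUM t-interval vs. a plain normal factor.
k_t = stats.t.ppf(0.975, nu_eff)
k_n = stats.norm.ppf(0.975)
print(f"nu_eff = {nu_eff:.1f}; U(t) = {k_t * u_c:.3f}; U(normal) = {k_n * u_c:.3f}")
```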
Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?
Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.
2013-01-01
Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140
Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Meegan, Charles A.
1997-01-01
This presentation will describe two applications of Bayesian statistics to Gamma Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because it easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return-stroke peak currents. Downward leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distribution, and the output of the program is the probability of direct attachment to objects of interest with the corresponding peak current distribution.
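A minimal sketch of this kind of Monte Carlo is below, reduced to a 2-D slice for brevity. It assumes the commonly used electrogeometric relation r = 10·I^0.65 m for striking distance and a lognormal peak-current model with a median near 31 kA; the scene geometry, flash count, and distribution parameters are illustrative, not those of the actual tool.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical 2-D scene: object x-position (m) and height (m).
objects = {"tower": (50.0, 30.0), "vehicle": (120.0, 15.0)}

n = 20000                                             # flashes over the simulated years
x0 = rng.uniform(0.0, 200.0, n)                       # uniform leader origins
ipk = rng.lognormal(mean=np.log(31.1), sigma=0.48, size=n)  # peak current (kA)
r = 10.0 * ipk**0.65                                  # striking distance (m)

counts = dict.fromkeys(list(objects) + ["ground"], 0)
for xi, ri in zip(x0, r):
    best, z_best = "ground", ri                       # leader height at ground attachment
    for name, (x, h) in objects.items():
        dx = abs(xi - x)
        if dx <= ri:
            z = h + np.sqrt(ri**2 - dx**2)            # leader height when object is in range
            if z > z_best:                            # highest attachment happens first
                best, z_best = name, z
    counts[best] += 1

for name, c in counts.items():
    print(f"{name}: {c / n:.2%} of flashes")
```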
The energetic ion signature of an O-type neutral line in the geomagnetic tail
NASA Technical Reports Server (NTRS)
Martin, R. F., Jr.; Johnson, D. F.; Speiser, T. W.
1991-01-01
An energetic ion signature is presented which has the potential for remote sensing of an O-type neutral line embedded in a current sheet. A source plasma with a tailward flowing Kappa distribution yields a strongly non-Kappa distribution after interacting with the neutral line: sharp jumps, or ridges, occur in the velocity space distribution function f(v⊥, v∥), associated with both increases and decreases in f. The jumps occur when orbits are reversed in the x-direction: a reversal causing initially earthward particles (low probability in the source distribution) to be observed results in a decrease in f, while a reversal causing initially tailward particles to be observed produces an increase in f. The reversals, and hence the jumps, occur at approximately constant values of perpendicular velocity in both the positive v∥ and negative v∥ half planes. The results were obtained using single particle simulations in a fixed magnetic field model.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
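The study propagates the random inputs through a finite element model, which cannot be reproduced here; as a rough stand-in, the short-form torque-preload relation T = K·F·d with the common nut-factor approximation K ≈ 0.16 + 1.17·μ illustrates the same Monte Carlo idea. All distribution parameters and the target window below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100000
# Illustrative stand-in for the FE model: short-form relation T = K * F * d,
# with nut factor K ~ 0.16 + 1.17*mu (thread and under-head friction assumed equal).
torque = rng.normal(0.32, 0.016, n)       # applied torque (N*m), ~5% scatter
mu_f = rng.normal(0.26, 0.05, n)          # friction coefficient, dry (assumed)
d = 0.002                                 # nominal screw diameter (m)

K = 0.16 + 1.17 * mu_f
preload = torque / (K * d)                # resulting preload (N)

lo, hi = 300.0, 450.0                     # hypothetical target preload window
p_in = np.mean((preload > lo) & (preload < hi))
print(f"mean = {preload.mean():.0f} N, sd = {preload.std():.0f} N, "
      f"P(in target window) = {p_in:.1%}")
```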
Comparative analysis through probability distributions of a data set
NASA Astrophysics Data System (ADS)
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, and demography. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. Goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions, and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
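A compact version of this workflow, fitting several candidate models by maximum likelihood and ranking them by a goodness-of-fit distance, might look like the following SciPy-based sketch; the gamma-generated data set stands in for the paper's measurements, and only the Kolmogorov-Smirnov test is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=500)   # stand-in for the data set

candidates = {"norm": stats.norm, "lognorm": stats.lognorm,
              "gamma": stats.gamma, "weibull_min": stats.weibull_min}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params)     # Kolmogorov-Smirnov test
    results.append((ks.statistic, name, ks.pvalue))

# Rank fitted models by KS distance, mirroring the paper's ordering idea.
for d, name, p in sorted(results):
    print(f"{name:12s} KS = {d:.4f}  p = {p:.3f}")
```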
Kepler Planet Reliability Metrics: Astrophysical Positional Probabilities for Data Release 25
NASA Technical Reports Server (NTRS)
Bryson, Stephen T.; Morton, Timothy D.
2017-01-01
This document is very similar to KSCI-19092-003, Planet Reliability Metrics: Astrophysical Positional Probabilities, which describes the previous release of the astrophysical positional probabilities for Data Release 24. The important changes for Data Release 25 are: (1) the computation of the astrophysical positional probabilities uses the Data Release 25 processed pixel data for all Kepler Objects of Interest; (2) computed probabilities now have associated uncertainties, whose computation is described in §4.1.3; (3) the scene modeling described in §4.1.2 uses background stars detected via ground-based high-resolution imaging, described in §5.1, that are not in the Kepler Input Catalog or UKIRT catalog. These newly detected stars are presented in Appendix B. Otherwise the text describing the algorithms and examples is largely unchanged from KSCI-19092-003.
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation matrix included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can approximate four-dimensional dose computed using the patient-specific respiratory trace.
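The core accumulation step, once every phase dose has been deformed onto the breath-hold geometry, is just a temporal-probability-weighted sum of the phase doses. A toy sketch with made-up dose arrays and weights (the trace-derived histogram below is invented, not a patient's):

```python
import numpy as np

# Doses of the 10 phase CTs, already deformed onto the breath-hold CT grid.
# Shapes and values are illustrative only.
phase_doses = np.random.default_rng(4).uniform(1.8, 2.2, size=(10, 64, 64, 32))

# Temporal probability: fraction of the breathing cycle spent in each phase.
p_uniform = np.full(10, 0.1)
trace_hist = np.array([0.06, 0.08, 0.10, 0.14, 0.16, 0.15, 0.12, 0.09, 0.06, 0.04])
p_patient = trace_hist / trace_hist.sum()          # patient-specific weights

# 4D dose = sum_i p_i * D_i, voxel by voxel.
dose_4d_uniform = np.tensordot(p_uniform, phase_doses, axes=1)
dose_4d_patient = np.tensordot(p_patient, phase_doses, axes=1)
print("max voxel difference:",
      np.abs(dose_4d_uniform - dose_4d_patient).max())
```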
Bergstrom, Stig M.; Huff, W.D.; Kolata, Dennis R.
1998-01-01
A large number of Lower Silurian (Llandovery) K-bentonite beds have been recorded from northwestern Europe, particularly in Baltoscandia and the British Isles, but previous attempts to trace single beds regionally have yielded inconclusive results. The present study suggests that based on its unusual thickness, stratigraphic position and trace element geochemistry, one Telychian ash bed, the Osmundsberg K-bentonite, can be recognized at many localities in Estonia, Sweden and Norway and probably also in Scotland and Northern Ireland. This bed, which is up to 115 cm thick, is in the lower-middle turriculatus Zone. The stratigraphic position, thickness variation and geographic distribution of the Osmundsberg K-bentonite are illustrated by means of 12 selected Llandovery successions in Sweden, Estonia, Norway, Denmark, Scotland and Northern Ireland. In Baltoscandia, the Osmundsberg K-bentonite shows a trend of general thickness increase in a western direction suggesting that its source area was located in the northern Iapetus region between Baltica and Laurentia. Because large-magnitude ash falls like the one that produced the Osmundsberg K-bentonite last at most a few weeks, such an ash bed may be used as a unique time-plane for a variety of regional geological and palaeontological studies.
Testi, M; Andreani, M; Locatelli, F; Arcese, W; Troiano, M; Battarra, M; Gaziev, J; Lucarelli, G
2014-08-01
The information regarding the probability of finding a matched unrelated donor (MUD) within a relatively short time is crucial for the success of hematopoietic stem cell transplantation (HSCT), particularly in patients with malignancies. In this study, we retrospectively analyzed 315 Italian patients who started a search for a MUD, in order to assess the distribution of human leukocyte antigen (HLA) alleles and haplotypes in this population of patients and to evaluate the probability of finding a donor. Comparing two groups of patients based on whether or not a 10/10 HLA-matched donor was available, we found that patients who had a fully-matched MUD possessed at least one frequent haplotype more often than the others (45.6% vs 14.3%; P = 0.000003). In addition, analysis of data pertaining to the HLA class I alleles distribution showed that, in the first group of patients, less common alleles were under-represented (20.2% vs 40.0%; P = 0.006). Therefore, the presence of less frequent alleles represents a negative factor for the search for a potential compatible donor being successful, whereas the presence of one frequent haplotype represents a positive predictive factor. Antigenic differences between patient and donor observed at C and DQB1 loci, were mostly represented by particular B/C or DRB1/DQB1 allelic associations. Thus, having a particular B or DRB1 allele, linked to multiple C or DQB1 alleles, respectively, might be considered to be associated with a lower probability of a successful search. Taken together, these data may help determine in advance the probability of finding a suitable unrelated donor for an Italian patient. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Wilson, Edward C F; Usher-Smith, Juliet A; Emery, Jon; Corrie, Pippa G; Walter, Fiona M
2018-06-01
Expert elicitation is required to inform decision making when relevant "better quality" data either do not exist or cannot be collected. An example of this is to inform decisions as to whether to screen for melanoma. A key input is the counterfactual, in this case the natural history of melanoma in patients who are undiagnosed and hence untreated. To elicit expert opinion on the probability of disease progression in patients with melanoma that is undetected and hence untreated. A bespoke webinar-based expert elicitation protocol was administered to 14 participants in the United Kingdom, Australia, and New Zealand, comprising 12 multinomial questions on the probability of progression from one disease stage to another in the absence of treatment. A modified Connor-Mosimann distribution was fitted to individual responses to each question. Individual responses were pooled using a Monte-Carlo simulation approach. Participants were asked to provide feedback on the process. A pooled modified Connor-Mosimann distribution was successfully derived from participants' responses. Feedback from participants was generally positive, with 86% willing to take part in such an exercise again. Nevertheless, only 57% of participants felt that this was a valid approach to determine the risk of disease progression. Qualitative feedback reflected some understanding of the need to rely on expert elicitation in the absence of "hard" data. We successfully elicited and pooled the beliefs of experts in melanoma regarding the probability of disease progression in a format suitable for inclusion in a decision-analytic model. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches
NASA Astrophysics Data System (ADS)
Lee, Jinhee; Song, Inseok
2018-01-01
Gravitationally unbound loose stellar associations (i.e., young nearby moving groups: moving groups hereafter) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resultant membership probability needs to be carefully adopted because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements for building models used in BANYAN II: (1) updating a list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probability by up to 70%. Membership probability is critical and must be better defined. For example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity of moving group configurations have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates. Here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better-defined moving group membership will help us understand star formation and evolution in relatively low density environments, especially for the low-mass stars which will be identified in the coming Gaia release.
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
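The letter's master equation is not given in the abstract, but one way to see how size-dependent additive noise can push cluster sizes toward a lognormal is that noise with amplitude proportional to the current size acts multiplicatively, so log-size performs a random walk. A sketch under that assumption, which is only one special case of the model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, steps = 20000, 500
x = np.full(n, 1.0)                      # identical initial cluster sizes
for _ in range(steps):
    # Additive noise whose amplitude is proportional to the current size
    # acts multiplicatively: x -> x * (1 + eta), eta small and zero-mean.
    x *= 1.0 + rng.normal(0.0, 0.02, n)

# log(x) is then a sum of many i.i.d. increments, so sizes are near-lognormal.
logx = np.log(x)
d, p = stats.kstest((logx - logx.mean()) / logx.std(), "norm")
print(f"K-S distance of log-size from normal: {d:.3f} (p = {p:.2f})")
```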
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Using this stochastic differential equation, a method to evaluate a default probability under a given risk buffer is proposed.
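A minimal 1-D sketch of such a process is a linear SDE with both a multiplicative and an additive noise term, dx = -γx dt + σ_m x dW₁ + σ_a dW₂, whose stationary density is known to be heavy-tailed (Student-t / q-Gaussian form). The Euler-Maruyama simulation below only checks the qualitative signature, positive excess kurtosis; the coefficients are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
gam, sig_m, sig_a = 1.0, 0.7, 0.3       # illustrative coefficients
dt = 1e-3

x, samples = 0.0, []
for i in range(50000):
    dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
    # Euler-Maruyama step for dx = -gam*x dt + sig_m*x dW1 + sig_a dW2
    x += -gam * x * dt + sig_m * x * dw1 + sig_a * dw2
    if i > 5000 and i % 10 == 0:        # discard burn-in, thin the path
        samples.append(x)

# A q-Gaussian (Student-t-like) stationary law shows up as heavy tails,
# i.e. positive excess kurtosis relative to a Gaussian.
print("excess kurtosis:", round(float(stats.kurtosis(np.asarray(samples))), 2))
```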
Net present value probability distributions from decline curve reserves estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, D.E.; Huffman, C.H.; Thompson, R.S.
1995-12-31
This paper demonstrates how reserves probability distributions can be used to develop net present value (NPV) distributions. NPV probability distributions were developed from the rate and reserves distributions presented in SPE 28333. This real-data study used practicing engineers' evaluations of production histories. Two approaches were examined to quantify portfolio risk. The first approach, the NPV Relative Risk Plot, compares the mean NPV with the NPV relative risk ratio for the portfolio. The relative risk ratio is the NPV standard deviation (σ) divided by the mean (μ) NPV. The second approach, the Risk-Return Plot, is a plot of the mean (μ) discounted cash flow rate of return (DCFROR) versus the standard deviation (σ) of the DCFROR distribution. This plot provides a risk-return relationship for comparing various portfolios. These methods may help evaluate property acquisition and divestiture alternatives and assess the relative risk of a suite of wells or fields for bank loans.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
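The square-root rule is easy to verify in a discrete toy setting: if a trial samples site i with probability q_i and succeeds only on an exact match, the expected number of trials is sum(p_i / q_i), which Lagrange multipliers show is minimized by q ∝ √p. A sketch (the four-site target distribution is arbitrary):

```python
import numpy as np

# Discrete toy version: the target sits at site i with probability p[i]; each
# trial samples a site from q and succeeds only on an exact match, so the
# expected number of trials given the target's site is 1/q[i], and the
# average search cost is sum(p / q).
p = np.array([0.5, 0.25, 0.15, 0.10])

def expected_trials(q):
    return float(np.sum(p / q))

q_match = p                                   # search where the target is likely
q_sqrt = np.sqrt(p) / np.sqrt(p).sum()        # predicted optimum: q ~ sqrt(p)

print("q = p      :", expected_trials(q_match))   # equals the number of sites
print("q = sqrt(p):", expected_trials(q_sqrt))    # strictly smaller
```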
Aguilera, Moisés A; Valdivia, Nelson; Broitman, Bernardo R
2015-01-01
Understanding the impacts of consumers on the abundance, growth rate, recovery and persistence of their resources across their distributional range can shed light on the role of trophic interactions in determining species range shifts. Here, we examined whether consumptive effects of the intertidal grazer Scurria viridula positively influence the abundance and recovery from disturbances of the alga Mazzaella laminarioides at the edge of its geographic distribution on northern-central Chilean rocky shores. Through field experiments conducted at a site in the region where M. laminarioides overlaps with the polar range edge of S. viridula, we estimated the effects of grazing on different life stages of M. laminarioides. We also used long-term abundance surveys conducted across ~700 km of the shore to evaluate co-occurrence patterns of the study species across their range overlap. We found that S. viridula had positive net effects on M. laminarioides by increasing its cover and re-growth from perennial basal crusts. The probability of occurrence of M. laminarioides increased significantly with increasing density of S. viridula across the range overlap. The negative effect of S. viridula on the percentage cover of opportunistic green algae, shown to compete for space with corticated algae, suggests that competitive release may be part of the mechanism driving the positive effect of the limpet on the abundance and recovery from disturbance of M. laminarioides. We suggest that grazer populations contribute to enhancing the abundance of M. laminarioides, facilitating its recolonization and persistence at its distributional range edge. Our study highlights that indirect facilitation can determine the recovery and persistence of a resource at the limit of its distribution, and may well contribute to the ecological mechanisms governing species distributions and range shifts.
Adaptive Optics Communications Performance Analysis
NASA Technical Reports Server (NTRS)
Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.
2004-01-01
The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.
Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education.
Masel, J; Humphrey, P T; Blackburn, B; Levine, J A
2015-01-01
Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students' intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes' theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. © 2015 J. Masel et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
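A typical example of the course's "biomedical Bayes" pillar is the positive-predictive-value calculation, where even a highly accurate test applied at low prevalence yields mostly false positives; the numbers below are the usual classroom illustration, not figures from the article.

```python
# A typical in-class exercise: probability of disease given a positive test.
sensitivity = 0.90     # P(test+ | disease)
specificity = 0.95     # P(test- | no disease)
prevalence = 0.01      # P(disease)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos   # Bayes' theorem
print(f"P(disease | test+) = {ppv:.1%}")   # about 15%, not 90%
```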
An integrated logit model for contamination event detection in water distribution systems.
Housh, Mashor; Ostfeld, Avi
2015-05-15
The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the approach developed here is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, for integrating the single alarms. The discrete choice model is jointly calibrated with the other components of the event detection framework on a training data set using genetic algorithms. The fusion of the individual indicator probabilities, often neglected in existing event detection models, is confirmed to be a crucial part of the system, and modelling it with a discrete choice model improves performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
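The abstract does not give the model's estimated coefficients, but the fusion idea can be sketched as a logistic model over the single-indicator probabilities on the log-odds scale; the weights, bias, and indicator labels below are hypothetical placeholders for the values the paper calibrates with maximum likelihood and genetic algorithms.

```python
import numpy as np

def fuse_alarms(p, w, b):
    """Fuse single-indicator event probabilities on the log-odds scale."""
    z = b + np.dot(w, np.log(p / (1.0 - p)))
    return 1.0 / (1.0 + np.exp(-z))              # fused event probability

# Hypothetical weights/bias standing in for the coefficients the paper
# estimates by maximum likelihood within a genetic-algorithm calibration.
w = np.array([1.4, 0.9, 0.5, 1.1])               # e.g. chlorine, turbidity, pH, TOC
b = -2.0

p_single = np.array([0.80, 0.55, 0.40, 0.70])    # per-indicator probabilities
print(f"fused event probability: {fuse_alarms(p_single, w, b):.2f}")
```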
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
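The paper's full generative model covers characters, words, word lengths, and lexicon membership; a stripped-down sketch of the same idea is a character-bigram model trained on whitelisted names, flagging names whose length-normalized log-likelihood is low. The tiny whitelist here is purely illustrative.

```python
import math
from collections import Counter

def train_bigram(domains, alpha=1.0):
    """Character-bigram model with add-alpha smoothing over a whitelist."""
    pairs, chars = Counter(), Counter()
    for d in domains:
        s = "^" + d + "$"                      # boundary markers
        pairs.update(zip(s, s[1:]))
        chars.update(s[:-1])
    vocab = set(c for d in domains for c in "^" + d + "$")
    def logprob(name):
        s = "^" + name + "$"
        return sum(math.log((pairs[(a, b)] + alpha) /
                            (chars[a] + alpha * len(vocab)))
                   for a, b in zip(s, s[1:])) / len(name)   # length-normalized
    return logprob

whitelist = ["google", "wikipedia", "example", "university", "weather"]
score = train_bigram(whitelist)
for name in ["mailserver", "xq7zk2vhp9"]:
    print(name, round(score(name), 2))   # low score -> candidate anomaly
```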
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple baseline) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEMs)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that use of long-tailed probability distributions (like Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics of lunar topography. References: [1] Moore, H. (1971), JGR, 75(11). [2] Marcus, A. H. (1969), JGR, 74(22). [3] Pike, R. J. (1970), U.S. Geological Survey Working Paper. [4] Costes, N. C., Farmer, J. E., and George, E. B. (1972), NASA Technical Report TR R-401. [5] Parker, M. N. and Tyler, G. L. (1973), Radio Science, 8(3), 177-184. [6] Alekseev, V. A. et al. (1968), Soviet Astronomy, 11, 860. [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488. [8] Smith et al. (2010), GRL, 37, L18204, doi:10.1029/2010GL043751. [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P. (2013), LPSC, #2924.
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
Probability and the changing shape of response distributions for orientation.
Anderson, Britt
2014-11-18
Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
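The mixture argument is easy to check numerically: a zero-mean scale mixture of normals (some trials drawn from a sharpened tuning function, some from a broad one) has positive excess kurtosis even though each component is Gaussian. A sketch with assumed component widths:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200000

# Single population of orientation errors vs. a mixture of two populations
# with the same mean but different tuning variances.
single = rng.normal(0.0, 10.0, n)
mix = np.where(rng.random(n) < 0.7,
               rng.normal(0.0, 5.0, n),     # sharpened responses
               rng.normal(0.0, 20.0, n))    # occasional large errors

print("excess kurtosis, single :", round(float(stats.kurtosis(single)), 2))  # ~0
print("excess kurtosis, mixture:", round(float(stats.kurtosis(mix)), 2))     # > 0
```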
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented to derive several statistical wind models from the properties of the multivariate normal probability distribution function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal distribution of wind. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
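As a quick check of property (2), the sketch below samples zero-mean, equal-variance, uncorrelated wind components, the special case in which the speed is exactly Rayleigh with scale σ, and compares the empirical speeds with the Rayleigh distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Zero-mean, equal-variance, uncorrelated wind components: the special case
# in which the wind speed is exactly Rayleigh distributed.
sigma = 5.0                                   # m/s
u, v = rng.normal(0.0, sigma, (2, 100000))
speed = np.hypot(u, v)

# Compare the empirical speed distribution with Rayleigh(scale=sigma).
d, p = stats.kstest(speed, "rayleigh", args=(0, sigma))
print(f"KS distance = {d:.4f}, p = {p:.2f}")
```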
Joint probabilities and quantum cognition
NASA Astrophysics Data System (ADS)
de Barros, J. Acacio
2012-12-01
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in their triggering behavior are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background and triggered events differ, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.
Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan
2007-09-10
How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
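The anatomical calculation behind the contact probabilities can be sketched as an overlap integral of the dorso-ventral density profiles of one type's axons and another type's dendrites; the Gaussian profiles below are stand-ins for the measured distributions, and the constant of proportionality is left out.

```python
import numpy as np

# Dorso-ventral axis discretized into bins; the two Gaussian profiles are
# illustrative stand-ins for the measured axon and dendrite distributions.
z = np.linspace(0.0, 1.0, 200)                   # normalized D-V position
dz = z[1] - z[0]
axon = np.exp(-0.5 * ((z - 0.35) / 0.10) ** 2)   # axon density, neuron type A
dend = np.exp(-0.5 * ((z - 0.50) / 0.15) ** 2)   # dendrite density, type B
axon /= axon.sum() * dz                          # normalize to unit area
dend /= dend.sum() * dz

# Contact probability taken proportional to the overlap integral of the
# axon and dendrite densities along the dorso-ventral axis.
overlap = np.sum(axon * dend) * dz
print(f"axon-dendrite overlap integral: {overlap:.2f}")
```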
Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy
Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh
2018-01-01
In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite element method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implants in the organs to the body surface, we calculate bounds on location estimators in three digestive organs: stomach, small intestine, and large intestine. We present an investigation of the causes influencing localization precision, including a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and the random variations in transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcome shows that the number of receiver antennas on the exterior of the body has a greater impact on the precision of the location than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution. PMID:29651364
Tellez, Jason A; Schmidt, Jason D
2011-08-20
The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
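As a numerical illustration of the aggregation effect described above, a gamma-gamma variate can be sampled as the product of two independent unit-mean gamma variates, and the irradiance of several beams averaged. The parameters and fade threshold are illustrative, and the beams here are fully independent, not the partially correlated case the paper models.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 4.0, 2.0                 # illustrative large-/small-scale scintillation parameters
n_samples = 200_000
fade_threshold = 0.3                   # illustrative fade level (fraction of mean irradiance)

for n_beams in (1, 2, 4, 8):
    # Gamma-gamma irradiance = product of two independent unit-mean gamma variates
    x = rng.gamma(alpha, 1.0 / alpha, (n_samples, n_beams))
    y = rng.gamma(beta, 1.0 / beta, (n_samples, n_beams))
    irradiance = (x * y).mean(axis=1)  # aggregate irradiance of the beam set
    print(n_beams, "beams: fade probability =", (irradiance < fade_threshold).mean())
```

Increasing the number of beams visibly shrinks the fade probability; the pulse-position modulation scheme then handles the residual fades.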
Pollen-based biomes for Beringia 18,000, 6000 and 0 14C yr BP
Edwards, M.E.; Anderson, P.M.; Brubaker, L.B.; Ager, T.A.; Andreev, A.A.; Bigelow, N.H.; Cwynar, L.C.; Eisner, Wendy R.; Harrison, S.P.; Hu, F.-S.; Jolly, D.; Lozhkin, A.V.; MacDonald, G.M.; Mock, Cary J.; Ritchie, J.C.; Sher, A.V.; Spear, R.W.; Williams, J.W.; Yu, G.
2000-01-01
The objective biomization method developed by Prentice et al. (1996) for Europe was extended using modern pollen samples from Beringia and then applied to fossil pollen data to reconstruct palaeovegetation patterns at 6000 and 18,000 14C yr BP. The predicted modern distribution of tundra, taiga and cool conifer forests in Alaska and north-western Canada generally corresponds well to actual vegetation patterns, although sites in regions characterized today by a mosaic of forest and tundra vegetation tend to be preferentially assigned to tundra. Siberian larch forests are delimited less well, probably due to the extreme under-representation of Larix in pollen spectra. The biome distribution across Beringia at 6000 14C yr BP was broadly similar to today, with little change in the northern forest limit, except for a possible northward advance in the Mackenzie delta region. The western forest limit in Alaska was probably east of its modern position. At 18,000 14C yr BP the whole of Beringia was covered by tundra. However, the importance of the various plant functional types varied from site to site, supporting the idea that the vegetation cover was a mosaic of different tundra types.
Bayesian assessment of moving group membership: importance of models and prior knowledge
NASA Astrophysics Data System (ADS)
Lee, Jinhee; Song, Inseok
2018-04-01
Young nearby moving groups are important and useful in many fields of astronomy, such as the study of exoplanets, low-mass stars, and the early evolution of planetary systems over tens of millions of years, which has led to intensive searches for their members. Identification of members depends sensitively on the models used; therefore, careful examination of the models is required. In this study, we investigate the effects of the models used in moving group (MG) membership calculations based on a Bayesian framework (e.g. BANYAN II), focusing on the beta Pictoris moving group (BPMG). Three improvements for building models are suggested: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age; (2) investigating member distribution functions in XYZ; and (3) exploring field star distribution functions in XYZ and UVW. The effect of each change is investigated, and we suggest using all of these improvements simultaneously in future membership probability calculations. Using this improved MG membership calculation and a careful examination of ages, 57 bona fide members of BPMG are confirmed, including 12 new members. We additionally suggest 17 highly probable members.
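The Bayesian membership step can be sketched with two competing Gaussian models, one for the moving group and one for the field, evaluated in XYZ and UVW. All means, covariances, the prior fraction, and the test star below are hypothetical placeholders, not the BPMG models derived in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical Gaussian models in XYZ (pc) and UVW (km/s)
mg_xyz  = multivariate_normal([10, -5, -20],  np.diag([20, 20, 15]) ** 2)
fld_xyz = multivariate_normal([0, 0, 0],      np.diag([80, 80, 60]) ** 2)
mg_uvw  = multivariate_normal([-10, -16, -9], np.diag([1.5, 1.5, 1.5]) ** 2)
fld_uvw = multivariate_normal([-11, -18, -7], np.diag([25, 25, 20]) ** 2)

prior_mg = 1e-3                                  # hypothetical prior member fraction
star_xyz, star_uvw = [12, -8, -18], [-9.8, -16.2, -8.9]

like_mg  = mg_xyz.pdf(star_xyz)  * mg_uvw.pdf(star_uvw)
like_fld = fld_xyz.pdf(star_xyz) * fld_uvw.pdf(star_uvw)
posterior = prior_mg * like_mg / (prior_mg * like_mg + (1 - prior_mg) * like_fld)
print("membership probability:", posterior)
```

The paper's improvements amount to replacing the member and field density models above with ones rebuilt from a vetted member list, to which the posterior is sensitive.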
Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy.
Khan, Umair; Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh
2018-01-01
In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite element method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implants in the organs to the body surface, we calculate bounds on location estimators in three digestive organs: stomach, small intestine, and large intestine. We present an investigation of the causes influencing localization precision, including a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and the random variations in transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcome shows that the number of receiver antennas on the exterior of the body has a greater impact on the precision of the location than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution.
Identifying Changes in the Probability of High Temperature, High Humidity Heat Wave Events
NASA Astrophysics Data System (ADS)
Ballard, T.; Diffenbaugh, N. S.
2016-12-01
Understanding how heat waves will respond to climate change is critical for adequate planning and adaptation. While temperature is the primary determinant of heat wave severity, humidity has been shown to play a key role in heat wave intensity with direct links to human health and safety. Here we investigate the individual contributions of temperature and specific humidity to extreme heat wave conditions in recent decades. Using global NCEP-DOE Reanalysis II daily data, we identify regional variability in the joint probability distribution of humidity and temperature. We also identify a statistically significant positive trend in humidity over the eastern U.S. during heat wave events, leading to an increased probability of high humidity, high temperature events. The extent to which we can expect this trend to continue under climate change is complicated due to variability between CMIP5 models, in particular among projections of humidity. However, our results support the notion that heat wave dynamics are characterized by more than high temperatures alone, and understanding and quantifying the various components of the heat wave system is crucial for forecasting future impacts.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions f_{α,β}(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
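The frequency-domain pattern itself is straightforward to evaluate numerically, which is useful for checking time-domain expressions against a numerical inverse transform. The parameter values below are arbitrary examples satisfying the stated constraints.

```python
import numpy as np

alpha, beta, tau0 = 0.5, 1.5, 1.0        # example values with alpha = l/k < 1, beta > 0
omega = np.logspace(-3, 3, 400)

# Havriliak-Negami pattern: 1 / (1 + (i*omega*tau0)**alpha)**beta
hn = 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

# Reparameterization used above: beta = (2-q)/(q-1), tau0 = (q-1)**(1/alpha)
q = 1.4
beta_q, tau0_q = (2.0 - q) / (q - 1.0), (q - 1.0) ** (1.0 / alpha)
hn_q = 1.0 / (1.0 + (1j * omega * tau0_q) ** alpha) ** beta_q
print(hn[:2], hn_q[:2])
```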
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably affect the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false positive probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks.
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O
2017-05-27
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably affect the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false positive probability, demonstrating its usefulness for detecting cross-layer attacks.
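A toy sketch of the flavor of such a scheme: combine per-layer trust metrics into an overall score and use a t-distribution tail to bound the false positive probability of a deviation test. The weights, metric values, observation count, and test statistic are all invented for illustration; the paper's actual per-layer parameters and combination rule are more elaborate.

```python
import numpy as np
from scipy import stats

# Hypothetical per-layer trust metrics in [0, 1]: PHY, MAC, and network layers
layer_trust = np.array([0.82, 0.74, 0.91])
weights = np.array([0.3, 0.4, 0.3])          # hypothetical layer weights (sum to 1)
overall_trust = float(weights @ layer_trust)

# Deviation test sketch: a node's standardized parameter deviation is treated as
# t-distributed under normal behavior (variance estimated from few observations)
n_obs = 10                                   # observations used to estimate the variance
t_stat = 2.9                                 # observed standardized deviation
p_false_positive = 2 * stats.t.sf(abs(t_stat), df=n_obs - 1)

print("overall trust:", overall_trust)
print("false positive probability at this threshold:", p_false_positive)
```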
Mitigating clogging and arrest in confined self-propelled systems
NASA Astrophysics Data System (ADS)
Savoie, William; Aguilar, Jeffrey; Monaenkova, Daria; Linevich, Vadim; Goldman, Daniel
Ensembles of self-propelling elements, like colloidal surfers, bacterial biofilms, and robot swarms, can spontaneously form density heterogeneities. To understand how to prevent potentially catastrophic clogs in task-oriented active matter systems (like soil-excavating robots), we present a robophysical study of excavation of granular media in a confined environment. We probe the efficacy of two social strategies observed in our studies of fire ants (S. invicta). The first behavior (denoted "unequal workload") prescribes to each excavator a different probability to enter the digging area. The second behavior (denoted "reversal") is characterized by a probability to forfeit excavation when progress is sufficiently obstructed. For equal workload distribution and no reversal behavior, clogs at the digging site prevent excavation for sufficient numbers of robots. Measurements of aggregation relaxation times reveal how the strategies mitigate clogs. The unequal workload behavior reduces the tunnel density, decreasing the probability of clog formation. Reversal behavior, while allowing clogs to form, reduces the aggregation relaxation time. We posit that application of social behaviors can be useful for swarm robot systems where global control and organization may not be possible.
Impact of contrarians and intransigents in a kinetic model of opinion dynamics
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Blanco, Victor H.; Anteneodo, Celia
2014-01-01
In this work we study opinion formation in a fully connected population participating in a public debate with two distinct choices, where the agents may adopt three different attitudes (favorable to one choice or the other, or undecided). The interactions between agents occur in pairs and are competitive, with couplings that are either negative with probability p or positive with probability 1-p. This bimodal probability distribution of couplings produces a behavior similar to the one resulting from the introduction of Galam's contrarians in the population. In addition, we consider that a fraction d of the individuals are intransigent, that is, reluctant to change their opinions. The consequences of the presence of contrarians and intransigents are studied by means of computer simulations. Our results suggest that the presence of inflexible agents affects the critical behavior of the system, causing either a shift of the critical point or the suppression of the ordering phase transition, depending on the groups of opinions to which the intransigents belong. We also discuss the relevance of the model for real social systems.
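A minimal Monte Carlo sketch of a kinetic exchange rule of this type is shown below; the clipped additive update is one simple variant consistent with the description above, and the population size, p, d, and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, d, steps = 1000, 0.2, 0.1, 500_000   # agents, negative-coupling prob, intransigent fraction

o = rng.choice([-1, 0, 1], size=N)          # three attitudes: two choices and undecided
fixed = rng.random(N) < d                   # intransigent agents never update

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j or fixed[i]:
        continue
    mu = -1.0 if rng.random() < p else 1.0  # competitive pairwise coupling
    o[i] = int(np.clip(o[i] + mu * o[j], -1, 1))

order = abs(o.sum()) / N                    # magnetization-like order parameter
print("order parameter:", order)
```

Sweeping p and d while recording the order parameter reproduces the kind of phase behavior the study explores.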
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Statistical analysis for the summary of statistics is reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for survival data, the highest values of goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer.
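The same goodness-of-fit workflow can be sketched with scipy in place of EasyFit. The synthetic survival times below merely stand in for the SEER samples, and note that in this sketch a smaller Kolmogorov-Smirnov statistic indicates a better fit; scipy's burr is the Burr Type III family (closely related to the Dagum distribution) and burr12 is Burr Type XII.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
survival = rng.gamma(2.0, 30.0, 2000)     # placeholder survival times (months)

candidates = {
    "burr12 (Burr XII)": stats.burr12,
    "burr (Burr III / Dagum-type)": stats.burr,
    "gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(survival)                          # maximum-likelihood fit
    ks = stats.kstest(survival, dist.name, args=params)  # KS goodness of fit
    print(f"{name:30s} KS statistic = {ks.statistic:.4f}")
```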
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
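The forward-probability result can be checked with a few lines of simulation: drive a synapse matrix with a Markov sequence, potentiate the used transition, and normalize across the synapses sharing a pre-synaptic neuron. The transition matrix and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
# Transition matrix of the experienced sequence (3 states)
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])

W = np.full((3, 3), 1.0 / 3.0)           # synaptic weights W[i, j] for i -> j
eta, state = 0.05, 0
for _ in range(50_000):
    nxt = rng.choice(3, p=P[state])
    W[state, nxt] += eta                 # Hebbian potentiation of the used transition
    W[state] /= W[state].sum()           # pre-synaptic competition (row normalization)
    state = nxt

print(np.round(W, 2))                    # rows fluctuate around the forward probabilities P
```

Normalizing columns instead of rows implements post-synaptic competition and drives the weights toward the backward probabilities, mirroring the contrast drawn above.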
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
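A minimal sketch of the discretize-then-regularize step: build the kernel for isotropic inclinations, generate a synthetic observed v sin i density, and solve the ridge-regularized normal equations. The true distribution, noise level, and Tikhonov parameter are arbitrary here; the paper's contribution includes a procedure for choosing that parameter.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
v = np.linspace(5.0, 400.0, n)        # equatorial speed grid (km/s)
u = v.copy()                          # observed v*sin(i) grid
dv = v[1] - v[0]

# Fredholm kernel for isotropic inclinations: p(u|v) = u / (v*sqrt(v^2 - u^2)), u < v
K = np.zeros((n, n))
for a in range(n):
    for c in range(n):
        if u[a] < v[c]:
            K[a, c] = u[a] / (v[c] * np.sqrt(v[c] ** 2 - u[a] ** 2)) * dv

f_true = np.exp(-0.5 * ((v - 150.0) / 40.0) ** 2)    # synthetic "true" distribution
f_true /= f_true.sum() * dv
y = K @ f_true + rng.normal(0.0, 1e-4, n)            # noisy observed density

lam = 1e-2                                            # Tikhonov regularization parameter
f_est = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)
```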
Quantitative assessment of building fire risk to life safety.
Guanquan, Chu; Jinhua, Sun
2008-06-01
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which the uncertainty of onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant pre-movement time is considered as a probability distribution. The consequences of a fire scenario can be evaluated according to the probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on the occurrence probability and consequences of every fire scenario. To illustrate the assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many systems and has a simple statistical form; its characteristic is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (Weibull shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability quantities. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
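For two independent exponential risks the net and crude probabilities named above have closed forms, which a few lines make explicit; the hazard rates and horizon are hypothetical.

```python
import numpy as np

lam1, lam2 = 0.02, 0.05   # hypothetical constant hazards of two independent risks
t = 20.0                  # time horizon

net_1   = 1 - np.exp(-lam1 * t)                                    # only risk 1 acting
crude_1 = lam1 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * t))  # risk 1 strikes first
overall = 1 - np.exp(-(lam1 + lam2) * t)                           # failure from any cause

print("net P1:", net_1, " crude P1:", crude_1, " overall:", overall)
```

The Bayesian analysis in the paper replaces the fixed hazards above with posterior distributions obtained from failure data under a non-informative prior.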
Sinner, Jim; Ellis, Joanne; Kandlikar, Milind; Halpern, Benjamin S.; Satterfield, Terre; Chan, Kai
2017-01-01
The elicitation of expert judgment is an important tool for assessment of risks and impacts in environmental management contexts, and especially important as decision-makers face novel challenges where prior empirical research is lacking or insufficient. Evidence-driven elicitation approaches typically involve techniques to derive more accurate probability distributions under fairly specific contexts. Experts are, however, prone to overconfidence in their judgements. Group elicitations with diverse experts can reduce expert overconfidence by allowing cross-examination and reassessment of prior judgements, but groups are also prone to uncritical "groupthink" errors. When the problem context is underspecified, the probability that experts commit groupthink errors may increase. This study addresses how structured workshops affect variability among experts' responses and the certainty within them, in a New Zealand case study. We find that experts' risk estimates before and after a workshop differ, and that group elicitations provided greater consistency of estimates, yet also greater uncertainty among experts, when addressing prominent impacts to four different ecosystem services in coastal New Zealand. After group workshops, experts provided more consistent ranking of risks and more consistent best estimates of impact through increased clarity in terminology and dampening of extreme positions, yet probability distributions for impacts widened. The results from this case study suggest that group elicitations have favorable consequences for the quality and uncertainty of risk judgments within and across experts, making group elicitation techniques invaluable tools in contexts of limited data. PMID:28767694
Active Longitude and Coronal Mass Ejection Occurrences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyenge, N.; Kiss, T. S.; Erdélyi, R.
The spatial inhomogeneity of the distribution of coronal mass ejection (CME) occurrences in the solar atmosphere could provide a tool to estimate the longitudinal position of the most probable CME-capable active regions in the Sun. The anomaly in the longitudinal distribution of active regions themselves is often referred to as active longitude (AL). In order to reveal the connection between the AL and CME spatial occurrences, here we investigate the morphological properties of active regions. The first morphological property studied is the separateness parameter, which is able to characterize the probability of the occurrence of an energetic event, such as a solar flare or CME. The second morphological property is the sunspot tilt angle. The tilt angle of sunspot groups allows us to estimate the helicity of active regions. The increased helicity leads to a more complex buildup of the magnetic structure and also can cause CME eruption. We found that the most complex active regions appear near the AL and that the AL itself is associated with the most tilted active regions. Therefore, the number of CME occurrences is higher within the AL. The origin of the fast CMEs is also found to be associated with this region. We concluded that the source of the most probable CME-capable active regions is at the AL. By applying this method, we can potentially forecast a flare and/or CME source several Carrington rotations in advance. This finding also provides new information for solar dynamo modeling.
Active Longitude and Coronal Mass Ejection Occurrences
NASA Astrophysics Data System (ADS)
Gyenge, N.; Singh, T.; Kiss, T. S.; Srivastava, A. K.; Erdélyi, R.
2017-03-01
The spatial inhomogeneity of the distribution of coronal mass ejection (CME) occurrences in the solar atmosphere could provide a tool to estimate the longitudinal position of the most probable CME-capable active regions in the Sun. The anomaly in the longitudinal distribution of active regions themselves is often referred to as active longitude (AL). In order to reveal the connection between the AL and CME spatial occurrences, here we investigate the morphological properties of active regions. The first morphological property studied is the separateness parameter, which is able to characterize the probability of the occurrence of an energetic event, such as a solar flare or CME. The second morphological property is the sunspot tilt angle. The tilt angle of sunspot groups allows us to estimate the helicity of active regions. The increased helicity leads to a more complex buildup of the magnetic structure and also can cause CME eruption. We found that the most complex active regions appear near the AL and that the AL itself is associated with the most tilted active regions. Therefore, the number of CME occurrences is higher within the AL. The origin of the fast CMEs is also found to be associated with this region. We concluded that the source of the most probable CME-capable active regions is at the AL. By applying this method, we can potentially forecast a flare and/or CME source several Carrington rotations in advance. This finding also provides new information for solar dynamo modeling.
Calculation of a fluctuating entropic force by phase space sampling.
Waters, James T; Kim, Harold D
2015-07-01
A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.
NASA Astrophysics Data System (ADS)
Qian, D. B.; Shi, F. D.; Chen, L.; Martin, S.; Bernard, J.; Yang, J.; Zhang, S. F.; Chen, Z. Q.; Zhu, X. L.; Ma, X.
2018-04-01
We propose an approach to determine the excitation energy distribution due to multiphoton absorption in the case of excited systems whose subsequent decays produce different ion species. This approach is based on the measurement of the time-resolved photoion position spectrum by using velocity map imaging spectrometry and an unfocused laser beam with a low fluence and homogeneous profile. Such a measurement allows us to identify the species and the origin of each ion detected and to depict the energy distribution using a pure Poisson distribution involving only one variable, which is proportional to the absolute photon absorption cross section. A cascade decay model is used to build direct connections between the energy distribution and the probability of detecting each ionic species. Comparison between experiments and simulations permits the energy distribution, and accordingly the absolute photon absorption cross section, to be determined. This approach is illustrated using C60 as an example. It may therefore be extended to a wide variety of molecules and clusters having decay mechanisms similar to those of fullerene molecules.
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
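NEWTPOIS itself is written in C; the sketch below re-derives the core iteration in Python for illustration. Newton's method is applied to the cumulative Poisson probability, whose derivative with respect to the parameter is simply the negative of the Poisson mass at k, and the solved parameter then yields the quoted gamma and chi-square percentiles (P(X <= k; lambda) equals the upper-tail probability of a chi-square variable with 2(k+1) degrees of freedom at 2*lambda).

```python
import math

def poisson_cdf(k, lam):
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def newton_poisson(k, prob, lam=1.0, tol=1e-12):
    """Solve P(X <= k; lam) = prob for lam; d(CDF)/d(lam) = -pmf(k; lam)."""
    for _ in range(200):
        f = poisson_cdf(k, lam) - prob
        dcdf = -math.exp(-lam) * lam ** k / math.factorial(k)
        step = f / dcdf
        lam = max(lam - step, 1e-12)      # keep the parameter positive
        if abs(step) < tol:
            break
    return lam

print(newton_poisson(5, 0.9))   # Poisson mean giving 90% probability of at most 5 events
```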
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
St John, Heidi K; Adams, Melissa L; Masuoka, Penny M; Flyer-Adams, Johanna G; Jiang, Ju; Rozmajzl, Patrick J; Stromdahl, Ellen Y; Richards, Allen L
2016-04-01
Rickettsia montanensis has long been considered a nonpathogenic member of the spotted fever group rickettsiae. However, the infection potential of R. montanensis is being revisited in light of its recent association with a case of human infection in the United States and the possibility that additional cases may have been misdiagnosed as Rocky Mountain spotted fever. To this end, DNA was extracted from American dog ticks (Dermacentor variabilis) removed from Department of Defense (DoD) personnel and their dependents at DoD medical treatment facilities (MTFs) during 2002-2012 (n = 4792). These 4792 samples were analyzed for the presence of R. montanensis (n = 36; 2.84%) and all vector DNA was confirmed to be of D. variabilis origin using a novel Dermacentor genus-specific quantitative real-time polymerase chain reaction procedure, Derm, and a novel Dermacentor species multilocus sequence typing assay. To assess the risk of R. montanensis infection, the positive and negative samples were geographically mapped utilizing MTF site locations. Tick localities were imported into a geographical information systems (GIS) program, ArcGIS, for mapping and analysis. The ecological niche modeling (ENM) program, Maxent, was used to estimate the probability of tick presence in the eastern United States using locations of both R. montanensis-positive and -negative ticks, climate, and elevation data. The ENM for R. montanensis-positive D. variabilis estimated high probabilities of the positive ticks occurring in two main areas, including the northern Midwest and mid-Atlantic portions of the northeastern regions of the United States, whereas the R. montanensis-negative D. variabilis tick model showed a wider estimated range. The results suggest that R. montanensis-positive and -negative D. variabilis have different ranges where humans may be at risk and are influenced by similar and different factors.
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
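The Panel's comparison reduces to the means of the two distributions, which is easy to verify directly; the sampling below simply confirms the closed-form expectations quoted above.

```python
import numpy as np

rng = np.random.default_rng(6)
upper = 1e-6
u = rng.uniform(0.0, upper, 1_000_000)            # uniform(0, 1e-6)
t = rng.triangular(0.0, 0.0, upper, 1_000_000)    # triangular, most probable value 0

print(u.mean())   # ~5.0e-7, the uniform midpoint noted by the Panel
print(t.mean())   # ~3.3e-7, the triangular expectation (0 + 0 + 1e-6) / 3
```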
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
2017-03-27
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
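A toy version of the generating-function bookkeeping behind such a calculation is sketched below: a neutron either leaks or induces a fission emitting k neutrons, each starting an independent sub-chain, and the distribution of the total number created is found by fixed-point iteration on the probability-generating-function coefficients, with convolutions done via the FFT. The leak/fission probability and the per-fission multiplicity distribution are illustrative numbers, not evaluated nuclear data, and the sketch assumes a subcritical chain so that the distribution is proper.

```python
import numpy as np

def conv(a, b, nmax):
    """Truncated linear convolution of two coefficient arrays via FFT."""
    n = 2 * nmax
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)[:nmax]

def chain_neutron_dist(p_fiss, q, nmax=128, iters=300):
    """P(a chain started by one neutron creates n neutrons in total).

    Fixed point of G(z) = (1 - p) + p * sum_k q[k] * z^k * G(z)^k,
    iterated on the coefficient arrays.
    """
    G = np.zeros(nmax); G[0] = 1.0
    for _ in range(iters):
        Gnew = np.zeros(nmax); Gnew[0] = 1.0 - p_fiss   # neutron leaks, chain ends
        Gk = np.zeros(nmax); Gk[0] = 1.0                # G^0
        for k, qk in enumerate(q):
            if k > 0:
                Gk = conv(Gk, G, nmax)                  # build up G^k
            if qk > 0:
                Gnew[k:] += p_fiss * qk * Gk[:nmax - k] # z^k shift for the k emitted
        G = Gnew
    return G

q = [0.03, 0.16, 0.34, 0.30, 0.13, 0.04]   # illustrative multiplicity distribution
dist = chain_neutron_dist(0.3, q)
print(dist[:6], dist.sum())                 # sum -> 1 for a subcritical chain
```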
Can we expect to predict climate if we cannot shadow weather?
NASA Astrophysics Data System (ADS)
Smith, Leonard
2010-05-01
What limits our ability to predict (or project) useful statistics of future climate? And how might we quantify those limits? In the early 1960s, Ed Lorenz illustrated one constraint on point forecasts of the weather (chaos) while noting another (model imperfections). In the mid-sixties he went on to discuss climate prediction, noting that chaos, per se, need not limit accurate forecasts of averages and the distributions that define climate. In short, chaos might place draconian limits on what we can say about a particular summer day in 2010 (or 2040), but it need not limit our ability to make accurate and informative statements about the weather over this summer as a whole, or climate distributions of the 2040s. If not chaos, what limits our ability to produce decision-relevant probability distribution functions (PDFs)? Is this just a question of technology (raw computer power) and uncertain boundary conditions (emission scenarios)? Arguably, current model simulations of the Earth's climate are limited by model inadequacy: not that the initial or boundary conditions are unknown but that state-of-the-art models would not yield decision-relevant probability distributions even if they were known. Or to place this statement in an empirically falsifiable format: that in 2100, when the boundary conditions are known and computer power is (hopefully) sufficient to allow exhaustive exploration of today's state-of-the-art models, we will find today's models do not admit a trajectory consistent with our knowledge of the state of the Earth in 2009 that would prove of decision-support relevance at, say, 25 km, hourly resolution. In short: today's models cannot shadow the weather of this century even after the fact. Restating this conjecture in a more positive frame: a 2100 historian of science will be able to determine the highest space and time scales on which 2009 models could have (i) produced trajectories plausibly consistent with the (by then) observed twenty-first century and (ii) produced probability distributions useful as such for decision support. As it will be some time until such conjectures can be refuted, how might we best advise decision makers of the detail (specifically, space and time resolution of a quantity of interest as a function of lead-time) at which it is rational to interpret model-based PDFs as decision-relevant probability distributions? Given the nonlinearities already incorporated in our models, how far into the future can one expect a simulation to get the temperature "right" given the simulation has precipitation badly "wrong"? When can biases in local temperature that melt model-ice no longer be dismissed, and neglected by presenting model-anomalies? At what lead times will feedbacks due to model inadequacies cause the 2007 model simulations to drift away from what today's basic science (and 2100 computer power) would suggest? How might one justify quantitative claims regarding "extreme events" (or NUMB weather)? Models are unlikely to forecast things they cannot shadow, or at least track. There is no constraint on rational scientists to take model distributions as their subjective probabilities, unless they believe the model is empirically adequate. How then are we to use today's simulations to inform today's decisions? Two approaches are considered. The first augments the model-based PDF with an explicit subjective probability of a "Big Surprise".
The second is to look not for a PDF but, following Solvency II, consider the risk from any event that cannot be ruled out at, say, the one in 200 level. The fact that neither approach provides the simplicity and apparent confidence of interpreting model-based PDFs as if they were objective probabilities does not contradict the claim that either might lead to better decision-making.
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu
In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We use Pearson's method of moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
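The two-component separation can be sketched quickly; Pearson's method of moments for a two-Gaussian mixture involves solving a high-order polynomial in the sample moments, so the sketch below substitutes an EM-based mixture fit (a deliberate swap, not the paper's estimator) and assumes the background is the lower-variance component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic intensity history of one pixel: stable background plus busy foreground
bg = rng.normal(90.0, 5.0, 7000)
fg = rng.normal(140.0, 25.0, 3000)
pixels = np.concatenate([bg, fg]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
means = gmm.means_.ravel()
variances = gmm.covariances_.ravel()
background_mean = means[np.argmin(variances)]   # assumption: tighter mode = background
print("estimated background intensity:", background_mean)
```

Tracking this background estimate over successive time windows and flagging shifts is the change-detection step described above.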
Optical detection of chemical warfare agents and toxic industrial chemicals
NASA Astrophysics Data System (ADS)
Webber, Michael E.; Pushkarsky, Michael B.; Patel, C. Kumar N.
2004-12-01
We present an analytical model evaluating the suitability of optical-absorption-based spectroscopic techniques for detection of chemical warfare agents (CWAs) and toxic industrial chemicals (TICs) in ambient air. The sensor performance is modeled by simulating absorption spectra of a sample containing both the target and a multitude of interfering species, as well as an appropriate stochastic noise, and determining the target concentrations from the simulated spectra via a least squares fit (LSF) algorithm. The distribution of the LSF target concentrations determines the sensor sensitivity, probability of false positives (PFP), and probability of false negatives (PFN). The model was applied to a CO2 laser based photoacoustic (L-PAS) CWA sensor and predicted single-digit ppb sensitivity with very low PFP rates in the presence of significant amounts of interferents. This approach will be useful for assessing sensor performance by developers and users alike; it also provides a methodology for intercomparison of different sensing technologies.
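The simulation loop described above is compact enough to sketch end to end: synthesize spectra from a spectral library, add noise, retrieve concentrations by least squares, and read detection statistics off the distribution of the retrieved target concentration. The library spectra, concentrations, noise level, and alarm threshold below are random placeholders, not CO2-laser line data.

```python
import numpy as np

rng = np.random.default_rng(8)
n_lines = 10                                   # number of laser lines (channels)
target = rng.random(n_lines)                   # placeholder target absorption spectrum
interf = rng.random((n_lines, 3))              # placeholder spectra of 3 interferents
S = np.column_stack([target, interf])          # spectral library

true_conc = np.array([5.0, 40.0, 25.0, 60.0])  # ppb; target present at 5 ppb
trials, noise = 2000, 0.5
est = np.empty(trials)
for i in range(trials):
    y = S @ true_conc + rng.normal(0.0, noise, n_lines)  # simulated measurement
    c, *_ = np.linalg.lstsq(S, y, rcond=None)            # least-squares retrieval
    est[i] = c[0]                                        # retrieved target concentration

alarm = 3.0                                    # ppb alarm threshold (placeholder)
print("P(false negative) at 5 ppb:", (est < alarm).mean())
```

Re-running with the target concentration set to zero gives the false positive rate at the same threshold, and sweeping the threshold traces out the trade-off between the two.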
Risk and utility in portfolio optimization
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Natoli, Vincent D.
2003-06-01
Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
Nuclear energy release from fragmentation
NASA Astrophysics Data System (ADS)
Li, Cheng; Souza, S. R.; Tsang, M. B.; Zhang, Feng-Shou
2016-08-01
It is well known that binary fission occurs with positive energy gain. In this article we examine the energetics of splitting uranium and thorium isotopes into various numbers of fragments (from two to eight) with nearly equal size. We find that the energy released by splitting 230,232Th and 235,238U into three equal size fragments is largest. The statistical multifragmentation model (SMM) is applied to calculate the probability of different breakup channels for excited nuclei. By weighing the probability distributions of fragment multiplicity at different excitation energies, we find the peaks of energy release for 230,232Th and 235,238U are around 0.7-0.75 MeV/u at excitation energy between 1.2 and 2 MeV/u in the primary breakup process. Taking into account the secondary de-excitation processes of primary fragments with the GEMINI code, these energy peaks fall to about 0.45 MeV/u.
Survival probability of diffusion with trapping in cellular neurobiology
NASA Astrophysics Data System (ADS)
Holcman, David; Marchewka, Avi; Schuss, Zeev
2005-09-01
The problem of diffusion with absorption and trapping sites arises in the theory of molecular signaling inside and on the membranes of biological cells. In particular, this problem arises in the case of spine-dendrite communication, where the number of calcium ions, modeled as random particles, is regulated across the spine microstructure by pumps, which play the role of killing sites, while the end of the dendritic shaft is an absorbing boundary. We develop a general mathematical framework for diffusion in the presence of absorption and killing sites and apply it to the computation of the time-dependent survival probability of ions. We also compute the ratio of the number of absorbed particles at a specific location to the number of killed particles. We show that the ratio depends on the distribution of killing sites. The biological consequence is that the position of the pumps regulates the fraction of calcium ions that reach the dendrite.
Failure-probability driven dose painting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET-positive volume) outward. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose–response is extracted from the literature, and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%–91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.
Kairisto, V; Poola, A
1995-01-01
GraphROC for Windows is a program for clinical test evaluation. It was designed for the handling of large datasets obtained from clinical laboratory databases. In the user interface, graphical and numerical presentations are combined. For simplicity, numerical data is not shown unless requested. Relevant numbers can be "picked up" from the graph by simple mouse operations. Reference distributions can be displayed by using automatically optimized bin widths. Any percentile of the distribution with corresponding confidence limits can be chosen for display. In sensitivity-specificity analysis, both illness- and health-related distributions are shown in the same graph. The following data for any cutoff limit can be shown in a separate click window: clinical sensitivity and specificity with corresponding confidence limits, positive and negative likelihood ratios, positive and negative predictive values and efficiency. Predictive values and clinical efficiency of the cutoff limit can be updated for any prior probability of disease. Receiver Operating Characteristics (ROC) curves can be generated and combined into the same graph for comparison of several different tests. The area under the curve with corresponding confidence interval is calculated for each ROC curve. Numerical results of analyses and graphs can be printed or exported to other Microsoft Windows programs. GraphROC for Windows also employs a new method, developed by us, for the indirect estimation of health-related limits and change limits from mixed distributions of clinical laboratory data.
[Hepatitis B case grouping serological study among six Chinese families in Almeria, Spain].
Barroso García, Pilar; Lucerna Méndez, M Angeles; Adrián Monforte, Estrella; Parrón Carreño, Tesifón
2004-01-01
Following the detection of two members of six Chinese families who had tested positive for the hepatitis B virus, a study of those living in these families was begun for the purpose of determining the spread of the infection within the family environment of the cases detected. Descriptive study. Population under study: 24 members of six Chinese families. Variables studied: age, sex, serological diagnosis, risk factors, healthcare-related attitude. Data sources: clinical records, serological data, epidemiological survey and immunization cards. A family focus was employed and the genogram used. A binomial distribution was used for calculating the probability of occurrence of the process studied. A total of 14 males (58.3%) and 10 females (41.7%) ranging from 1 to 54 years of age were studied. The age group with the largest number of subjects studied was the 21-30 age group (37.5%). Twelve chronic hepatitis B infections were recorded (50%). No relationship was found with the risk factors studied in the epidemiological survey conducted. The probability of this number of chronic hepatitis cases occurring was 0.066 × 10⁻⁶. It was concluded that the prevalence of infection found was probably due to intra-family transmission. Given the low probability of occurrence of a process of this type, the case grouping found is considered to be high.
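The binomial calculation behind that tiny probability is a one-liner once a background prevalence is assumed; the 8% baseline below is a hypothetical figure for illustration, not a value taken from the study.

```python
from scipy.stats import binom

n, k = 24, 12            # family members studied, chronic infections observed
p_baseline = 0.08        # hypothetical background prevalence of chronic HBV

# Probability of at least 12 chronic infections among 24 by chance alone
print(binom.sf(k - 1, n, p_baseline))   # ~7e-8, the same order as the value reported above
```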
NASA Astrophysics Data System (ADS)
Andersen, Christian Walther; Sibani, Paolo
2016-05-01
Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time t_w also survives at t > t_w. Finally, survival probabilities of species are shown to decay as powers of the ratio t/t_w, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variable description. Responses of macroapparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.
Pinzón-Sánchez, C; Cabrera, V E; Ruegg, P L
2011-04-01
The objective of this study was to develop a decision tree to evaluate the economic impact of different durations of intramammary treatment for the first case of mild or moderate clinical mastitis (CM) occurring in early lactation with various scenarios of pathogen distributions and use of on-farm culture. The tree included 2 decision and 3 probability events. The first decision evaluated use of on-farm culture (OFC; 2 programs using OFC and 1 not using OFC) and the second decision evaluated treatment strategies (no intramammary antimicrobials or antimicrobials administered for 2, 5, or 8 d). The tree included probabilities for the distribution of etiologies (gram-positive, gram-negative, or no growth), bacteriological cure, and recurrence. The economic consequences of mastitis included costs of diagnosis and initial treatment, additional treatments, labor, discarded milk, milk production losses due to clinical and subclinical mastitis, culling, and transmission of infection to other cows (only for CM caused by Staphylococcus aureus). Pathogen-specific estimates for bacteriological cure and milk losses were used. The economically optimal path for several scenarios was determined by comparison of expected monetary values. For most scenarios, the optimal economic strategy was to treat CM caused by gram-positive pathogens for 2 d and to avoid antimicrobials for CM cases caused by gram-negative pathogens or when no pathogen was recovered. Use of extended intramammary antimicrobial therapy (5 or 8 d) resulted in the least expected monetary values. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
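As a rough illustration of how such a decision tree is rolled back, the Python sketch below computes the expected monetary value of a single treatment strategy across the pathogen, cure, and recurrence events named in the abstract. All pathogen probabilities and costs are hypothetical placeholders, not the paper's estimates.

```python
def strategy_cost(pathogen_probs, cure_probs, treat_cost, recurrence_cost, base_cost):
    """Roll back a one-branch-per-pathogen tree: pathogen -> cure -> recurrence."""
    emv = 0.0
    for pathogen, p_path in pathogen_probs.items():
        p_cure = cure_probs[pathogen]
        # Failed cures risk a recurrence, which incurs an extra cost
        branch_cost = base_cost + treat_cost + (1.0 - p_cure) * recurrence_cost
        emv += p_path * branch_cost
    return emv

# Hypothetical inputs: distribution of etiologies and 2-day-treatment cure rates
pathogens = {"gram_positive": 0.35, "gram_negative": 0.35, "no_growth": 0.30}
cures     = {"gram_positive": 0.70, "gram_negative": 0.85, "no_growth": 0.90}
print(strategy_cost(pathogens, cures, treat_cost=60.0,
                    recurrence_cost=250.0, base_cost=100.0))
```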
Anomalous, non-Gaussian tracer diffusion in crowded two-dimensional environments
NASA Astrophysics Data System (ADS)
Ghosh, Surya K.; Cherstvy, Andrey G.; Grebenkov, Denis S.; Metzler, Ralf
2016-01-01
A topic of intense current investigation pursues the question of how the highly crowded environment of biological cells affects the dynamic properties of passively diffusing particles. Motivated by recent experiments we report results of extensive simulations of the motion of a finite-sized tracer particle in a heterogeneously crowded environment made up of quenched distributions of monodisperse crowders of varying sizes in finite circular two-dimensional domains. For given spatial distributions of monodisperse crowders we demonstrate how anomalous diffusion with strongly non-Gaussian features arises in this model system. We investigate both biologically relevant situations of particles released either at the surface of an inner domain or at the outer boundary, exhibiting distinctly different features of the observed anomalous diffusion for heterogeneous distributions of crowders. Specifically we reveal an asymmetric spreading of tracers even at moderate crowding. In addition to the mean squared displacement (MSD) and local diffusion exponent we investigate the magnitude and the amplitude scatter of the time averaged MSD of individual tracer trajectories, the non-Gaussianity parameter, and the van Hove correlation function. We also quantify how the average tracer diffusivity varies with the position in the domain with a heterogeneous radial distribution of crowders and examine the behaviour and dynamics of the tracer survival probability. Inter alia, the systems we investigate are related to the passive transport of lipid molecules and proteins in two-dimensional crowded membranes or the motion in colloidal solutions or emulsions in effectively two-dimensional geometries, as well as inside supercrowded, surface-adhered cells.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
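For concreteness, the sketch below implements the unconstrained special case: the maximum-path-entropy random walk on an undirected graph, whose transition matrix is built from the principal eigenpair of the adjacency matrix and whose stationary distribution is the squared principal eigenvector. The state- and path-dependent constraints analyzed in the paper would enter by reweighting the adjacency matrix, which is not shown here.

```python
import numpy as np

def merw(A):
    """Maximum-path-entropy random walk on an undirected graph with adjacency A:
    P[i, j] = A[i, j] * psi[j] / (lam * psi[i]),  pi[i] ~ psi[i]**2,
    where (lam, psi) is the principal eigenpair of A."""
    w, V = np.linalg.eigh(A)
    lam, psi = w[-1], np.abs(V[:, -1])       # Perron eigenpair, psi taken positive
    P = A * psi[None, :] / (lam * psi[:, None])
    pi = psi**2 / np.sum(psi**2)             # stationary distribution over states
    return P, pi

# Example: 4-node path graph; pi concentrates on central nodes, unlike the
# degree-proportional stationary law of the ordinary random walk
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
P, pi = merw(A)
print(P.sum(axis=1), pi)   # rows of P sum to 1
```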
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Lu, Zhixin
2017-10-01
Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms in wireless sensor networks are often used for localization. However, the traditional convex programming algorithm suffers from excessive overlap of wireless sensor nodes, which lowers positioning accuracy, so this paper proposes a new algorithm. It builds on the traditional convex programming algorithm: the spectrum car dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location area. Because the algorithm only adds the communication of the power value between the unknown node and the sensor node, the advantages of the convex programming algorithm are essentially preserved, keeping the method simple and real-time. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, set on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while lognormal, Weibull and Pearson 5 were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
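A minimal Python sketch of this style of analysis, using scipy.stats rather than the authors' software and a hypothetical rainfall series, fits a few candidate distributions, ranks them by the Kolmogorov-Smirnov statistic, and converts the chosen fit into return levels:

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum daily rainfall series (mm); replace with real data
mdr = np.array([42., 55., 61., 48., 95., 70., 66., 58., 102., 77.,
                51., 63., 88., 59., 73., 67., 112., 49., 81., 69.])

# Compare candidate models with the Kolmogorov-Smirnov statistic (smaller is better)
for dist in (stats.norm, stats.lognorm, stats.weibull_min):
    params = dist.fit(mdr)
    ks = stats.kstest(mdr, dist.cdf, args=params).statistic
    print(dist.name, round(ks, 3))

# Return levels from one fit: the MDR exceeded on average once in T years
params = stats.norm.fit(mdr)
for T in (2, 5, 10, 20, 25):
    print(T, "yr:", round(stats.norm.ppf(1 - 1.0 / T, *params), 1), "mm")
```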
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2017-05-01
We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
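Modern equivalents of most of these routines exist in scientific Python. The snippet below is an analogy rather than the report's Fortran interface; it evaluates a few of the named distributions and draws uniform and normal random numbers.

```python
import numpy as np
from scipy import stats

x, p = 2.5, 0.99
print(stats.gamma(a=2.0).pdf(x))        # gamma density
print(stats.chi2(df=4).cdf(x))          # chi-square CDF
print(stats.weibull_min(c=1.5).ppf(p))  # Weibull quantile
print(stats.pearson3(skew=0.5).ppf(p))  # Pearson Type III quantile

# Uniform and normal random numbers, analogous to the report's generators
rng = np.random.default_rng(42)
print(rng.uniform(size=3), rng.normal(size=3))
```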
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Since Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is applicable in a just and equilibrium economy, so that an income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g., the USA. This result implies that the hypothesis of equal probability may be suitable only for some "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability is unavailable.
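The maximum-entropy argument can be made concrete in a textbook special case: with total income conserved and microstates equally probable, maximizing the entropy of the income density f(x) subject to normalization and a fixed mean income yields the exponential (Boltzmann-Gibbs) law. This derivation is a standard econophysics illustration consistent with, but not quoted from, the abstract:

\[
\max_{f}\; -\int_0^\infty f(x)\ln f(x)\,dx
\quad\text{subject to}\quad
\int_0^\infty f(x)\,dx = 1,\qquad
\int_0^\infty x\,f(x)\,dx = \bar{x},
\]
whose Lagrange-multiplier solution is \( f(x) = \bar{x}^{-1} e^{-x/\bar{x}} \), with the mean income \(\bar{x}\) playing the role of temperature.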
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to the gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
NASA Astrophysics Data System (ADS)
Villanueva, Anthony Allan D.
2018-02-01
We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.
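For reference, the quantity at the center of this argument is the standard probability current of the continuity equation. With the conventions below (one dimension, mass m), a negative j(0,t) drains probability from the region x > 0, since dP/dt = j(0,t) when the wave function vanishes as x → +∞. This is the textbook definition, not a reproduction of the paper's derivation:

\[
j(x,t)=\frac{\hbar}{m}\,\mathrm{Im}\!\left[\psi^{*}(x,t)\,\partial_x \psi(x,t)\right],
\qquad
\partial_t\,\lvert\psi\rvert^{2}+\partial_x\, j=0,
\qquad
\frac{d}{dt}\int_{0}^{\infty}\lvert\psi\rvert^{2}\,dx = j(0,t).
\]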
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
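The MATLAB code itself accompanies the article; as an independent sanity check, the Python sketch below evaluates the maximum Pc for the simplest case, an equal-prior yes/no task, where the ideal observer's score equals one half of the integral of max(f0, f1). A uniform integration grid is assumed.

```python
import numpy as np
from scipy import stats

def max_pc_yes_no(pdf0, pdf1, grid):
    """Maximum proportion correct of an ideal observer in an equal-prior
    yes/no task: Pc = 0.5 * integral of max(f0, f1) over the stimulus axis."""
    f0, f1 = pdf0(grid), pdf1(grid)
    dx = grid[1] - grid[0]                       # uniform grid spacing
    return 0.5 * np.sum(np.maximum(f0, f1)) * dx

x = np.linspace(-10, 10, 20001)
# Gaussian example: d' = 1 gives the textbook value Pc = Phi(d'/2) ~ 0.691
print(max_pc_yes_no(stats.norm(0, 1).pdf, stats.norm(1, 1).pdf, x))
# Uniform (non-Gaussian) example with 50% overlap: Pc = 0.75
print(max_pc_yes_no(stats.uniform(0, 1).pdf, stats.uniform(0.5, 1).pdf, x))
```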
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
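A sketch of the general workflow, not the authors' estimator, is shown below: scattered intensities are modeled as a Gaussian mixture, the number of components is selected by an information criterion (the paper's specific order-selection criterion differs), and each column is assigned its most probable component. The simulated intensities are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical column intensities from four thickness classes (arbitrary units)
rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(mu, 0.05, 80) for mu in (1.0, 1.3, 1.6, 1.9)])
X = intensities.reshape(-1, 1)

# Select the number of mixture components by minimizing BIC
bics = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 9)]
best_k = int(np.argmin(bics)) + 1
gmm = GaussianMixture(n_components=best_k, random_state=0).fit(X)

# Assign each column its most probable component (a proxy for atom count class)
labels = gmm.predict(X)
print(best_k, np.bincount(labels))
```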
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background: This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
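A small numerical sketch of the construction, under the assumption of a discrete parameter grid: the sensitivity factor β is solved from the expectation-value constraint, the canonical distribution is formed, and a marginal is obtained by summing out the other parameter. The error-function values below are synthetic placeholders, not the New Jersey shelf data.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical error-function values on a 2-D parameter grid
rng = np.random.default_rng(1)
E = rng.uniform(0.5, 2.0, size=(50, 40))
E_star = 0.9   # expectation value estimated from a sparse set of data samples

def mean_E(beta):
    w = np.exp(-beta * (E - E.min()))   # shifted for numerical stability
    p = w / w.sum()
    return (p * E).sum()

# beta such that the canonical distribution reproduces the constraint <E> = E_star
beta = brentq(lambda b: mean_E(b) - E_star, 1e-6, 1e3)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()

# Marginal distribution for the first parameter: integrate out the second
marginal = p.sum(axis=1)
print(beta, marginal.sum())
```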
Probability distributions of continuous measurement results for conditioned quantum evolution
NASA Astrophysics Data System (ADS)
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, demonstrating anomalously large average outputs and a sudden jump in the time-integrated output. We present and discuss the numerical evaluation of the probability distribution, aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.
2016-01-01
The 2011 Tohoku earthquake produced an unexpected large amount of shallow slip greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies both with depth, earthquake size and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733
Reske, Kimberly A.; Hink, Tiffany; Dubberke, Erik R.
2016-01-01
The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. PMID:27927930
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formalized the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
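The central weighting function is easy to evaluate. The sketch below implements w(p) = (1 - k log p)^(-1) and prints its values across probabilities; k is a free discounting parameter, and the value used here is arbitrary rather than an estimate from the study.

```python
import numpy as np

def w(p, k):
    """Probability weighting function w(p) = (1 - k*ln(p))**(-1) from the
    hyperbolic-discounting expected value model; note w(1) = 1."""
    return 1.0 / (1.0 - k * np.log(p))

p = np.array([0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99])
print(np.round(w(p, k=1.0), 3))   # with k = 1, small probabilities are overweighted
```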
Hybrid Approaches and Industrial Applications of Pattern Recognition,
1980-10-01
...emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will ... sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the ...
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
NASA Astrophysics Data System (ADS)
Boslough, M.
2011-12-01
Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based on the global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative density function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g. agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
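The contract-family construction sketched above can be written in a few lines. The prices below are invented for illustration; they are read as exceedance probabilities, from which the market-based CDF, the interpolated median (best estimate), and a central spread follow.

```python
import numpy as np

# Hypothetical prices (in probability units) of contracts paying out if the
# anomaly exceeds each threshold; prices are read as exceedance probabilities
thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])   # deg C
exceedance = np.array([0.92, 0.74, 0.45, 0.20, 0.06])   # contract prices

cdf = 1.0 - exceedance                              # market-based CDF
best_estimate = np.interp(0.5, cdf, thresholds)     # median by interpolation
spread = np.interp(0.84, cdf, thresholds) - np.interp(0.16, cdf, thresholds)
print(best_estimate, spread)   # consensus estimate and a rough 1-sigma width
```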
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, whose probability density function is given, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
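A modern re-creation of the fitting step is straightforward. The sketch below fits a beta distribution to a hypothetical sample of sky-cover fractions with the support pinned to [0, 1]; the data are placeholders, not the 160 empirical distributions of the study.

```python
from scipy import stats

# Hypothetical sky-cover fractions (0 = clear, 1 = overcast) at one site
cloud = [0.05, 0.10, 0.95, 0.99, 0.40, 0.85, 0.20, 0.75, 0.60, 0.90,
         0.15, 0.98, 0.02, 0.55, 0.80, 0.35, 0.70, 0.88, 0.25, 0.65]

# Fit a beta distribution; floc/fscale pin the support to [0, 1]
a, b, loc, scale = stats.beta.fit(cloud, floc=0, fscale=1)
print(a, b)   # U-shaped cover distributions give a < 1 and b < 1
```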
May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M
2018-03-13
Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min^-1 and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
NASA Technical Reports Server (NTRS)
Savage, B. D.
1975-01-01
Ultraviolet extinction bumps are investigated in the interstellar extinction curves between 1800 and 3600 A for 36 stars which have (B-V) excesses ranging from 0.03 to 0.55 and are mostly confined to the brighter OB associations distributed along the galactic plane. Each extinction curve is found to have a broad bump which peaks near 2175 A and whose position and profile appear to be constant among all the stars. It is shown that the bump is probably interstellar in origin and that the constancy of its position and shape places such severe restrictions on grain geometrical parameters that classical scattering theory cannot be used to explain the feature unless the dust grains in widely separated regions of space and with very different physical conditions are assumed to have nearly identical size and shape distributions. Three extinction curves which extend to 1100 A are examined and found to have the same general characteristics as the others. Several extinction curves are analyzed for fine structure, but no convincing evidence is found in the present interval. Some processes are discussed which may be responsible for the bumps.
Insight into the split and asymmetry of charge distribution in biased M-structure superlattice
NASA Astrophysics Data System (ADS)
Liu, Lu; Bi, Han; Zhao, Yunhao; Zhao, Xuebing; Han, Xi; Wang, Guowei; Xu, Yingqiang; Li, Yuesheng; Che, Renchao
2017-07-01
The charge distribution in real space of an insertion variant based on an InAs/GaSb superlattice for an infrared detector is illustrated by in situ electron microscopy. The localization split of positive charge can be directly observed in the InAs/GaSb/AlSb/GaSb superlattice (M-structure) rather than in the InAs/GaSb superlattice. With the applied bias increasing from 0 to 4.5 V, the double peaks of positive charge density become asymmetrical gradually, with the peak integral ratio ranging from 1.13 to 2.54. Simultaneously, the negative charges move along the direction of the negative electric field. Without inserting the AlSb layer, the charge inversion occurs in both the hole wells and the electron wells of the InAs/GaSb superlattice under high bias. Such a discrepancy between the M-structure superlattice and the traditional superlattice suggests an effective reduction of tunneling probability in the M-structure design. Our result is of great help in understanding the carrier migration mechanism of the superlattice-based infrared detector.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
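The MAP computation for the single-truncated (positivity) case can be sketched as follows: the Gaussian data misfit and a zero-mean Gaussian prior are stacked into one augmented least-squares system and passed to a non-negative least-squares solver. All matrices below are small hypothetical stand-ins for Green's functions, geodetic data, and covariances; a zero prior mean is assumed.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_obs, n_patch = 12, 6
G = rng.uniform(0.0, 1.0, (n_obs, n_patch))            # stand-in Green's functions
m_true = np.maximum(rng.normal(1.0, 1.0, n_patch), 0)  # non-negative true slip
d = G @ m_true + rng.normal(0.0, 0.05, n_obs)          # synthetic observations

Wd = np.eye(n_obs) / 0.05     # inverse square root of data covariance (sigma = 0.05)
Wm = np.eye(n_patch) / 1.0    # inverse square root of prior covariance

# Minimize |Wd (G m - d)|^2 + |Wm m|^2 subject to m >= 0 as one stacked system
A = np.vstack([Wd @ G, Wm])
b = np.concatenate([Wd @ d, np.zeros(n_patch)])
m_map, _ = nnls(A, b)
print(np.round(m_map, 2), np.round(m_true, 2))
```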
Quantum work in the Bohmian framework
NASA Astrophysics Data System (ADS)
Sampaio, R.; Suomela, S.; Ala-Nissila, T.; Anders, J.; Philbin, T. G.
2018-01-01
At nonzero temperature classical systems exhibit statistical fluctuations of thermodynamic quantities arising from the variation of the system's initial conditions and its interaction with the environment. The fluctuating work, for example, is characterized by the ensemble of system trajectories in phase space and, by including the probabilities for various trajectories to occur, a work distribution can be constructed. However, without phase-space trajectories, the task of constructing a work probability distribution in the quantum regime has proven elusive. Here we use quantum trajectories in phase space and define fluctuating work as power integrated along the trajectories, in complete analogy to classical statistical physics. The resulting work probability distribution is valid for any quantum evolution, including cases with coherences in the energy basis. We demonstrate the quantum work probability distribution and its properties with an exactly solvable example of a driven quantum harmonic oscillator. An important feature of the work distribution is its dependence on the initial statistical mixture of pure states, which is reflected in higher moments of the work. The proposed approach introduces a fundamentally different perspective on quantum thermodynamics, allowing full thermodynamic characterization of the dynamics of quantum systems, including the measurement process.
Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2015-12-01
The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
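The aggregation step can be illustrated directly: the station-specific exceedance curve is a probability-weighted sum of per-source exceedance curves. The zone weights and lognormal amplitude models below are hypothetical, not fitted values from the study.

```python
import numpy as np
from scipy import stats

# Per-zone event-rate shares and amplitude models at one station (hypothetical)
zones = {
    "zone_A": (0.5, stats.lognorm(s=1.0, scale=0.3)),
    "zone_B": (0.3, stats.lognorm(s=1.2, scale=0.5)),
    "zone_C": (0.2, stats.lognorm(s=0.8, scale=0.2)),
}

amplitudes = np.linspace(0.05, 5.0, 100)
# Mixture exceedance: P(A > a) = sum_k w_k * P_k(A > a)
exceed = sum(w * dist.sf(amplitudes) for w, dist in zones.values())

# Amplitude with 1% exceedance probability (arrays reversed so x is increasing)
print(np.interp(0.01, exceed[::-1], amplitudes[::-1]))
```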
Cytologic diagnosis: expression of probability by clinical pathologists.
Christopher, Mary M; Hotz, Christine S
2004-01-01
Clinical pathologists use descriptive terms or modifiers to express the probability or likelihood of a cytologic diagnosis. Words are imprecise in meaning, however, and may be used and interpreted differently by pathologists and clinicians. The goals of this study were to 1) assess the frequency of use of 18 modifiers, 2) determine the probability of a positive diagnosis implied by the modifiers, 3) identify preferred modifiers for different levels of probability, 4) ascertain the importance of factors that affect expression of diagnostic certainty, and 5) evaluate differences based on gender, employment, and experience. We surveyed 202 clinical pathologists who were board-certified by the American College of Veterinary Pathologists (Clinical Pathology). Surveys were distributed in October 2001 and returned by e-mail, fax, or surface mail over a 2-month period. Results were analyzed by parametric and nonparametric tests. Survey response rate was 47.5% (n = 96) and primarily included clinical pathologists at veterinary schools (n = 58) and diagnostic laboratories (n = 31). Eleven of 18 terms were used "often" or "sometimes" by >/= 50% of respondents. Broad variability was found in the probability assigned to each term, especially those with median values of 75 to 90%. Preferred modifiers for 7 numerical probabilities ranging from 0 to 100% included 68 unique terms; however, a set of 10 terms was used by >/= 50% of respondents. Cellularity and quality of the sample, experience of the pathologist, and implications of the diagnosis were the most important factors affecting the expression of probability. Because of wide discrepancy in the implied likelihood of a diagnosis using words, defined terminology and controlled vocabulary may be useful in improving communication and the quality of data in cytology reporting.
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10^-3 in scientific notation or for brevity 10^-3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10^-3 to 10^-6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
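A stripped-down illustration of the extrapolation, using one distribution and one estimation method on a synthetic record, shows how quantiles at very low AEPs follow from a fitted model. It is emphatically not a substitute for the Bulletin 17B/EMA machinery described above.

```python
import numpy as np
from scipy import stats

# Synthetic annual peak streamflows; a real analysis would use the systematic
# record plus historical information, as in USGS-PeakFQ
rng = np.random.default_rng(7)
peaks = stats.genextreme(c=-0.1, loc=20000, scale=8000).rvs(80, random_state=rng)

# Fit a generalized extreme value distribution and extrapolate to low AEPs
c, loc, scale = stats.genextreme.fit(peaks)
for aep in (1e-2, 1e-3, 1e-4, 1e-6):
    q = stats.genextreme.ppf(1.0 - aep, c, loc, scale)
    print(f"AEP {aep:.0e}: {q:,.0f}")
```

Refitting with an alternative model (for example, a log-normal) and comparing the extrapolated quantiles gives a crude feel for the distribution-choice uncertainty the report quantifies.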
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
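A schematic of the moment-matching construction (interval, degree, and data are illustrative assumptions; the published algorithm includes safeguards this sketch omits, and nonnegativity of the polynomial is not enforced here): choose coefficients c_j of p(x) = Σ c_j x^j on [a, b] so that the polynomial reproduces the sample's raw moments.

```python
import numpy as np

def polynomial_pdf_coeffs(samples, degree, a, b):
    # Sample raw moments m_k = E[x^k] for k = 0..degree (m_0 = 1 gives normalization)
    m = np.array([np.mean(samples**k) for k in range(degree + 1)])
    # A[k, j] = integral_a^b x^(k+j) dx, so A @ c reproduces the moment vector
    A = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(degree + 1)] for k in range(degree + 1)])
    return np.linalg.solve(A, m)   # coefficients of p(x) = sum_j c_j x^j

rng = np.random.default_rng(1)
x = rng.weibull(2.0, 10_000)                       # illustrative data
c = polynomial_pdf_coeffs(x, 4, 0.0, 3.0)
grid = np.linspace(0.0, 3.0, 7)
print(np.polynomial.polynomial.polyval(grid, c))   # approximate PDF values
```

Because the approximation is a polynomial, convolving two such PDFs reduces to integrating polynomial products, which is the practical advantage the authors highlight.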
Levine, A Joan; Phipps, Amanda I; Baron, John A; Buchanan, Daniel D; Ahnen, Dennis J; Cohen, Stacey A; Lindor, Noralane M; Newcomb, Polly A; Rosty, Christophe; Haile, Robert W; Laird, Peter W; Weisenberger, Daniel J
2016-01-01
The CpG island methylator phenotype (CIMP) is a major molecular pathway in colorectal cancer. Approximately 25% to 60% of CIMP tumors are microsatellite unstable (MSI-H) due to DNA hypermethylation of the MLH1 gene promoter. Our aim was to determine if the distributions of clinicopathologic factors in CIMP-positive tumors with MLH1 DNA methylation differed from those in CIMP-positive tumors without DNA methylation of MLH1. We assessed the associations between age, sex, tumor site, MSI status, BRAF and KRAS mutations, and family colorectal cancer history with MLH1 methylation status in a large population-based sample of CIMP-positive colorectal cancers, defined by a 5-marker panel, using unconditional logistic regression to assess the odds of MLH1 methylation by study variables. Subjects with CIMP-positive tumors without MLH1 methylation were significantly younger, more likely to be male, and more likely to have distal colon or rectal primaries and the MSI-L phenotype. CIMP-positive MLH1-unmethylated tumors were significantly less likely than CIMP-positive MLH1-methylated tumors to harbor a BRAF V600E mutation and significantly more likely to harbor a KRAS mutation. MLH1 methylation was associated with significantly better overall survival (HR, 0.50; 95% confidence interval, 0.31-0.82). These data suggest that MLH1 methylation in CIMP-positive tumors is not a completely random event and imply that there are environmental or genetic determinants that modify the probability that MLH1 will become methylated during CIMP pathogenesis. MLH1 DNA methylation status should be taken into account in etiologic studies. ©2015 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.
2018-04-01
High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.
Designing occupancy studies when false-positive detections occur
Clement, Matthew
2016-01-01
1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false-positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum model uniquely. For these two studies, we use the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed in recent years for the analysis of atomic spectra. As indicated by the name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
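A toy nested-sampling run may make the evidence idea concrete (illustrative only; this is not the Nested_fit program): estimate Z = ∫ L(θ)π(θ)dθ for a one-dimensional Gaussian likelihood under a uniform prior on [-5, 5], where the exact answer is very close to 1/10.

```python
import numpy as np

rng = np.random.default_rng(0)
def loglike(t):   # N(0, 0.5^2) likelihood
    return -0.5 * (t / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

n_live = 100
live = rng.uniform(-5, 5, n_live)
logL = loglike(live)
logZ = -np.inf
log_shell = np.log(1 - np.exp(-1 / n_live))   # log width of each prior-mass shell

for i in range(800):                          # final live-point term omitted
    worst = int(np.argmin(logL))
    logZ = np.logaddexp(logZ, log_shell - i / n_live + logL[worst])
    while True:                               # replace worst point above L_worst
        t = rng.uniform(-5, 5)
        if loglike(t) > logL[worst]:
            live[worst], logL[worst] = t, loglike(t)
            break

print(f"log Z = {logZ:.2f} (exact = {np.log(0.1):.2f})")
```

Running the same loop under competing models and comparing the resulting log-evidences is precisely the hypothesis-weighing step the abstract describes.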
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of probability values, model statistics show remarkable agreement with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal complexity to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
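An illustrative reduction of the single-parameter idea to code (grid size, toppling threshold, drop count, and p are my own choices, with "susceptibility" proxied by the current grain count; the paper's exact rules may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
L, thresh, p = 32, 4, 0.005                  # p within the cited critical range
grid = rng.integers(0, thresh, (L, L))
sizes = []

for _ in range(20_000):
    if rng.random() < p:                     # targeted drop on the fullest site
        i, j = np.unravel_index(np.argmax(grid), grid.shape)
    else:                                    # otherwise a random site
        i, j = rng.integers(0, L, 2)
    grid[i, j] += 1
    size = 0
    while (unstable := np.argwhere(grid >= thresh)).size:
        for a, b in unstable:                # relax: topple to 4 neighbors
            grid[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= a + da < L and 0 <= b + db < L:
                    grid[a + da, b + db] += 1   # grains leave at open boundaries
    if size:
        sizes.append(size)

hist, _ = np.histogram(sizes, bins=np.logspace(0, 4, 20))
print(hist)   # heavy-tailed avalanche sizes, cf. the GR-like power law
```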
Characterising RNA secondary structure space using information entropy
2013-01-01
Comparative methods for RNA secondary structure prediction use evolutionary information from RNA alignments to increase prediction accuracy. The model is often described in terms of stochastic context-free grammars (SCFGs), which generate a probability distribution over secondary structures. It is, however, unclear how this probability distribution changes as a function of the input alignment. As prediction programs typically only return a single secondary structure, better characterisation of the underlying probability space of RNA secondary structures is of great interest. In this work, we show how to efficiently compute the information entropy of the probability distribution over RNA secondary structures produced for RNA alignments by a phylo-SCFG, and implement it for the PPfold model. We also discuss interpretations and applications of this quantity, including how it can clarify reasons for low prediction reliability scores. PPfold and its source code are available from http://birc.au.dk/software/ppfold/. PMID:23368905
Nielsen, Bjørn G; Jensen, Morten Ø; Bohr, Henrik G
2003-01-01
The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity of [Leu]enkephalin and [Met]enkephalin to the known mu- and delta-type opiate receptors to which they bind as agonists. Other plausible consequences of these probability distributions are discussed in relation to the way in which they may influence the dynamics of the synapse. Copyright 2003 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 71: 577-592, 2003
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.
Cronin, Paul; Dwamena, Ben A
2018-05-01
This study aimed to calculate the multiple-level likelihood ratios (LRs) and posttest probabilities for a positive, indeterminate, or negative test result for multidetector computed tomography pulmonary angiography (MDCTPA) ± computed tomography venography (CTV) and magnetic resonance pulmonary angiography (MRPA) ± magnetic resonance venography (MRV) for each clinical probability level (two-, three-, and four-level) for the nine most commonly used clinical prediction rules (CPRs) (Wells, Geneva, Miniati, and Charlotte). The study design is a review of observational studies with critical review of multiple cohort studies. The settings are acute care, emergency room care, and ambulatory care (inpatients and outpatients). Data were used to estimate pulmonary embolism (PE) pretest probability for each of the most commonly used CPRs at each probability level. Multiple-level LRs (positive, indeterminate, negative test) were generated and used to calculate posttest probabilities for MDCTPA, MDCTPA + CTV, MRPA, and MRPA + MRV from sensitivity and specificity results from Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) II and PIOPED III for each clinical probability level for each CPR. Nomograms were also created. The LRs for a positive test result were higher for MRPA compared to MDCTPA without venography (76 vs 20) and with venography (42 vs 18). LRs for a negative test result were lower for MDCTPA compared to MRPA without venography (0.18 vs 0.22) and with venography (0.12 vs 0.15). In the three-level Wells score, the pretest clinical probability of PE for a low, moderate, and high clinical probability score is 5.7, 23, and 49. The posttest probability for an initially low clinical probability PE for a positive, indeterminate, and negative test result, respectively, for MDCTPA is 54, 5 and 1; for MDCTPA + CTV is 52, 2, and 0.7; for MRPA is 82, 6, and 1; and for MRPA + MRV is 72, 3, and 1; for an initially moderate clinical probability PE for MDCTPA is 86, 22, and 5; for MDCTPA + CTV is 85, 10, and 4; for MRPA is 96, 25, and 6; and for MRPA + MRV is 93, 14, and 4; and for an initially high clinical probability of PE for MDCTPA is 95, 47, and 15; for MDCTPA + CTV is 95, 27, and 10; for MRPA is 99, 52, and 17; and for MRPA + MRV is 98, 34, and 13. For a positive test result, LRs were considerably higher for MRPA compared to MDCTPA. However, both a positive MRPA and MDCTPA have LRs >10 and therefore can confirm the presence of PE. Performing venography reduced the LR for a positive and negative test for both MDCTPA and MRPA. The nomograms give posttest probabilities for a positive, indeterminate, or negative test result for MDCTPA and MRPA (with and without venography) for each clinical probability level for each of the CPR. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
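The posttest probabilities quoted above follow from the standard likelihood-ratio update through odds. As a quick check using the study's reported MRPA figures (positive LR 76, negative LR 0.22), the 23% moderate pretest probability maps to roughly 96% and 6%:

```python
def posttest(pretest: float, lr: float) -> float:
    odds = pretest / (1.0 - pretest)        # probability -> odds
    post_odds = odds * lr                   # Bayes update by likelihood ratio
    return post_odds / (1.0 + post_odds)    # odds -> probability

print(f"positive MRPA: {posttest(0.23, 76):.0%}")    # ~96%, as reported
print(f"negative MRPA: {posttest(0.23, 0.22):.0%}")  # ~6%, as reported
```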
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of pests in orchards can provide important information for designing monitoring schemes and establishing better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated, by using probability kriging. Adults of B. minax were captured in two successive occurrences in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that the adults were estimated to occur in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
A one-dimensional statistical mechanics model for nucleosome positioning on genomic DNA.
Tesoro, S; Ali, I; Morozov, A N; Sulaiman, N; Marenduzzo, D
2016-02-12
The first level of folding of DNA in eukaryotes is provided by the so-called '10 nm chromatin fibre', where DNA wraps around histone proteins (∼10 nm in size) to form nucleosomes, which go on to create a zig-zagging bead-on-a-string structure. In this work we present a one-dimensional statistical mechanics model to study nucleosome positioning within one such 10 nm fibre. We focus on the case of genomic sheep DNA, and we start from effective potentials valid at infinite dilution and determined from high-resolution in vitro salt dialysis experiments. We study positioning within a polynucleosome chain, and compare the results for genomic DNA to that obtained in the simplest case of homogeneous DNA, where the problem can be mapped to a Tonks gas. First, we consider the simple, analytically solvable, case where nucleosomes are assumed to be point-like. Then, we perform numerical simulations to gauge the effect of their finite size on the nucleosomal distribution probabilities. Finally we compare nucleosome distributions and simulated nuclease digestion patterns for the two cases (homogeneous and sheep DNA), thereby providing testable predictions of the effect of sequence on experimentally observable quantities in experiments on polynucleosome chromatin fibres reconstituted in vitro.
The Detection of Signals in Impulsive Noise.
1983-06-01
If the noise has a symmetric distribution, sgn(x_i) will be -1 with probability 1/2 and +1 with probability 1/2; the sum of the observation signs can then be treated as binomial.
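A toy version of that sign statistic (all parameters are my own assumptions, not the report's): under symmetric noise the count of positive signs is Binomial(n, 1/2) when no signal is present, so a binomial test on the sign count gives a nonparametric detector that tolerates impulsive, heavy-tailed noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 101
noise = rng.standard_t(2, n)     # heavy-tailed but symmetric "impulsive" noise
signal = 0.4                     # constant signal level (assumed)
k = int(np.sum(np.sign(signal + noise) > 0))   # positive-sign count
pval = stats.binomtest(k, n, 0.5, alternative="greater").pvalue
print(k, pval)                   # small p-value -> signal detected
```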
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. Probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are then derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Because power-law distributions and distributions of the product form, which cannot be derived from the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, we conclude that the maximum entropy principle is a more basic principle: it embodies concepts more extensively and reveals the fundamental laws governing the motion of objects more deeply. At the same time, this principle reveals the intrinsic link between Nature and the different objects of human society and the principles they all obey.
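For reference, the textbook mechanism that the above generalizes (standard maximum-entropy calculus with Lagrange multipliers; the authors' three exponential factors modify these constraints): the form of the constraint dictates the form of the distribution,

$$\max_{\{p_i\}} S=-\sum_i p_i\ln p_i \quad\text{subject to}\quad \sum_i p_i=1,\ \sum_i p_i\,\varepsilon_i=U \;\Longrightarrow\; p_i\propto e^{-\beta\varepsilon_i},$$

$$\text{replacing the mean-value constraint by}\ \sum_i p_i\ln\varepsilon_i=C \;\Longrightarrow\; p_i\propto \varepsilon_i^{-\lambda},$$

and imposing both constraints at once yields the product form $p_i\propto\varepsilon_i^{-\lambda}e^{-\beta\varepsilon_i}$.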
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Steady-state distributions of probability fluxes on complex networks
NASA Astrophysics Data System (ADS)
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of Markovian stochastic dynamics on complex networks to examine the statistical properties of probability fluxes. An additional transition, called hereafter a gate, powered by an external constant force, breaks detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emerging under such conditions converge to a Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to and far from the equilibrium state. Other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate, and a change in the network size, are studied by means of computer simulations and discussed in terms of the rigorous theoretical predictions.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a geographic information system (GIS) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class likelihoods. The mixture model makes it possible to handle heterogeneous thematic classes that cannot be adequately fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
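A schematic EM iteration for a Wishart mixture in the spirit of the algorithm above (scipy offers only the real Wishart, which stands in here for the complex Wishart of PolSAR covariance data; class count, degrees of freedom, and the synthetic data are illustrative):

```python
import numpy as np
from scipy.stats import wishart

def em_step(samples, weights, scales, df):
    # E-step: posterior class probabilities for each sample covariance matrix
    lik = np.array([[w * wishart.pdf(S, df=df, scale=V / df)
                     for w, V in zip(weights, scales)] for S in samples])
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: responsibilities set the class priors and weight the scale updates
    new_w = resp.mean(axis=0)
    new_V = [np.tensordot(resp[:, k], samples, axes=1) / resp[:, k].sum()
             for k in range(len(scales))]
    return new_w, new_V

rng = np.random.default_rng(6)
df, true_V = 9, (np.eye(3), 4 * np.eye(3))
samples = np.concatenate([wishart.rvs(df, V / df, size=50, random_state=rng)
                          for V in true_V])
w, V = [0.5, 0.5], [np.eye(3), 2 * np.eye(3)]
for _ in range(20):
    w, V = em_step(samples, w, V, df)
print(np.round(w, 2))   # class priors recovered near 0.5 / 0.5
```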
Oil spill contamination probability in the southeastern Levantine basin.
Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam
2015-02-15
Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.
Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I
2007-02-01
We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy, ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, ε*_τ(r) = h*_τ r^(α_τ), in which case a correlation with hydrophobicity scales is found for the product α_τ h*_τ. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
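One compact way to write the distribution described above (my paraphrase of the stated ingredients, with ε(R) = R as the effective energy): the radial atom count combines the concentric-shell volume with a Fermi-Dirac occupancy,

$$n(R)\,dR \;\propto\; \frac{4\pi R^{2}}{e^{\beta(R-\mu)}+1}\,dR,$$

which grows quadratically for R ≪ μ (the constant-density core) and falls off rapidly for R ≫ μ, matching the reported behavior on either side of R_max.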
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this knowledge gap is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required by standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
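One practical payoff of the two-moment property (a sketch with made-up moments, not the Cape Cod values): converting the mean and variance of the normalized concentration into Beta parameters immediately yields exceedance probabilities.

```python
from scipy.stats import beta

mean, var = 0.3, 0.02             # assumed moments of normalized concentration
nu = mean * (1 - mean) / var - 1  # method-of-moments factor (needs var < mean*(1-mean))
a, b = mean * nu, (1 - mean) * nu
print(beta.sf(0.6, a, b))         # P(concentration exceeds 0.6)
```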
Kwon, Jennie H; Reske, Kimberly A; Hink, Tiffany; Burnham, C A; Dubberke, Erik R
2017-02-01
The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. Copyright © 2017 American Society for Microbiology.
Pons-Duran, Clara; González, Raquel; Quintó, Llorenç; Munguambe, Khatia; Tallada, Joan; Naniche, Denise; Sacoor, Charfudin; Sicuri, Elisa
2016-12-01
To analyse the association between socio-economic status (SES) and HIV in Manhiça, a district of Southern Mozambique with one of the highest HIV prevalences in the world. Data were gathered from two cross-sectional surveys performed in 2010 and 2012 among 1511 adults and from the household census of the district's population. Fractional polynomial logit models were used to analyse the association between HIV and SES, controlling for age and sex and taking into account the nonlinearity of covariates. The inequality of the distribution of HIV infection with regard to SES was computed through a concentration index. Fourth and fifth wealth quintiles, the least poor, were associated with a reduced probability of HIV infection compared to the first quintile (OR = 0.595, P-value = 0.009 and OR = 0.474, P-value < 0.001, respectively). Probability of HIV infection peaked at 36 years and then fell, and was always higher for women regardless of age and SES. HIV infection was unequally distributed across the SES strata. Despite the high HIV prevalence across the entire population of Manhiça, the poorest are at greatest risk of being HIV infected. While women have a higher probability of being HIV positive than men, both sexes showed the same infection reduction at higher levels of SES. HIV interventions in the area should particularly focus on the poorest and on women without neglecting anyone else, as the HIV risk is high for everyone. © 2016 John Wiley & Sons Ltd.
Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock
2013-01-01
Background Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. Findings The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number of catch nights when simulating the sampling, and sampled with increasing minimum distances between traps. We used resampling to generate a distribution of different mean and median abundance in each sample. Finally, we used the hypergeometric distribution to estimate the probability of falsely detecting absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. Conclusions Despite spatial clustering in vector abundance, we found no effect of increasing the distance between traps. We found that 18 traps were generally required to reach 90% probability of a true positive catch when sampling just one night. But when sampling over two nights the same probability level was obtained with just three traps per night. The results are useful for the design of vector monitoring programmes on fields with grazing animals. PMID:23705770
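A sketch of the hypergeometric false-absence calculation described above (the trap-night total echoes the study's 45 traps × 16 nights, but the number of "positive" trap-nights m is purely an assumption): the probability that n sampled trap-nights all miss is the hypergeometric mass at zero.

```python
from scipy.stats import hypergeom

N, m = 45 * 16, 80    # all trap-nights; those that would catch vectors (assumed)
for n in (3, 6, 18):
    p_false_absence = hypergeom.pmf(0, N, m, n)
    print(f"{n:2d} trap-nights: P(false absence) = {p_false_absence:.3f}")
```

With these made-up numbers, 18 single-night trap-nights leave roughly a 12% chance of a false absence, in the spirit of the 90% detection figure quoted above.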
Properties of CGM-Absorbing Galaxies
NASA Astrophysics Data System (ADS)
Hamill, Colin; Conway, Matthew; Apala, Elizabeth; Scott, Jennifer
2018-01-01
We extend the results of a study of the sightlines of 45 low-redshift quasars (0.06 < z < 0.85) observed by HST/COS that lie within the Sloan Digital Sky Survey. We have used photometric data from the SDSS DR12, along with the known absorption characteristics of the intergalactic medium and circumgalactic medium, to identify the most probable galaxy matches to absorbers in the spectroscopic dataset. Here, we use photometric data and measured galaxy parameters from SDSS DR12 to examine the distributions of galaxy properties such as virial radius, morphology, and position angle among those that match to absorbers within a specific range of impact parameters. We compare those distributions to galaxies within the same impact parameter range that are not matched to any absorber in the HST/COS spectrum in order to investigate global properties of the circumgalactic medium.
The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.
Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik
2014-11-11
Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have frequently been used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and of the critical dissolved oxygen, using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
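A hedged sketch contrasting the two baseline approaches discussed (mean-value first-order analysis versus Monte Carlo) on the Streeter-Phelps critical deficit; parameter means, coefficients of variation, and the threshold are illustrative, and D0 = 0 is assumed:

```python
import numpy as np
from scipy import stats

def critical_deficit(kd, ka, L0):
    tc = np.log(ka / kd) / (ka - kd)       # time of the critical deficit (D0 = 0)
    return (kd * L0 / ka) * np.exp(-kd * tc)

mu = np.array([0.3, 0.7, 20.0])            # kd (1/d), ka (1/d), L0 (mg/L)
sd = 0.15 * mu                             # assumed 15% coefficients of variation

# Mean-value first-order: numerical gradient at the means + normal approximation
eps = 1e-5
grad = np.array([(critical_deficit(*(mu + eps * np.eye(3)[i]))
                  - critical_deficit(*mu)) / eps for i in range(3)])
m, s = critical_deficit(*mu), np.sqrt(np.sum((grad * sd) ** 2))
print("FOSM P(Dc > 6 mg/L):", stats.norm.sf(6.0, m, s))

# Monte Carlo benchmark with independent normal parameters
rng = np.random.default_rng(0)
samples = rng.normal(mu, sd, size=(100_000, 3))
print("MC   P(Dc > 6 mg/L):", np.mean(critical_deficit(*samples.T) > 6.0))
```

The advanced method of the abstract improves on the first print line by re-linearizing at the output level of interest rather than at the central values; that refinement is not reproduced in this sketch.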
Laboratory Characterization and Modeling of a Near-Infrared Enhanced Photomultiplier Tube
NASA Technical Reports Server (NTRS)
Biswas, A.; Farr, W. H.
2003-01-01
The photon-starved channel for optical communications from deep space requires the development of detector technology that can achieve photon-counting sensitivities with high bandwidth. In this article, a near-infrared enhanced photomultiplier tube (PMT) with a quantum efficiency of 0.08 at a 1.06-μm wavelength is characterized in the laboratory. A Polya distribution model is used to compute the probability distribution function of the emitted secondary photoelectrons from the PMT. The model is compared with measured pulse-height distributions with reasonable agreement. The model accounts for realistic device parameters, such as the individual dynode stage gains and a shape parameter that is representative of the spatial uniformity of response across the photocathode and dynodes. Bit-error rate (BER) measurements also are presented for 4- and 8-ary pulse-position modulation (PPM) schemes with data rates of 20 to 30 Mb/s. A BER of 10⁻² is obtained for a mean of 8 detected photons.
Aguirre-Salado, Alejandro Ivan; Vaquera-Huerta, Humberto; Aguirre-Salado, Carlos Arturo; Reyes-Mora, Silvia; Olvera-Cervantes, Ana Delia; Lancho-Romero, Guillermo Arturo; Soubervielle-Montalvo, Carlos
2017-07-06
We implemented a spatial model for analysing PM10 maxima across the Mexico City metropolitan area during the period 1995-2016. We assumed that these maxima follow a non-identical generalized extreme value (GEV) distribution and modeled the trend by introducing multivariate smoothing spline functions into the GEV probability distribution. A flexible, three-stage hierarchical Bayesian approach was developed to analyse the distribution of the PM10 maxima in space and time. We evaluated the statistical model's performance by using a simulation study. The results showed strong evidence of a positive correlation between the PM10 maxima and the longitude and latitude. The relationship between time and the PM10 maxima was negative, indicating a decreasing trend over time. Finally, a high risk of PM10 maxima presenting levels above 1000 μg/m³ (return period: 25 yr) was observed in the northwestern region of the study area.
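A toy version of the extreme-value ingredient above: fit a GEV to a series of annual maxima and estimate the 25-year return level. The data are synthetic stand-ins, not the Mexico City measurements, and the spatial spline trend is omitted.

```python
from scipy.stats import genextreme

# Synthetic "annual PM10 maxima" (μg/m³) from an assumed heavy-tailed GEV
maxima = genextreme.rvs(-0.2, loc=600, scale=120, size=22, random_state=3)
c, loc, scale = genextreme.fit(maxima)
level = genextreme.isf(1 / 25, c, loc, scale)  # exceeded on average once per 25 yr
print(f"25-yr return level: {level:.0f} μg/m³")
```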
Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach
NASA Technical Reports Server (NTRS)
Mata, Carlos T.; Rakov, V. A.
2008-01-01
There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origins of downward-propagating leaders and a lognormal distribution to generate the corresponding return-stroke peak currents. Leaders propagate vertically downward, and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to objects of interest with the corresponding peak-current distributions.
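A simplified electrogeometric Monte Carlo in the spirit described (uniform leader origins, lognormal peak currents; the striking-distance constants, current statistics, and single-tower geometry are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
x = rng.uniform(-500.0, 500.0, N)          # leader horizontal position, m
I = rng.lognormal(np.log(31.1), 0.48, N)   # peak current, kA (assumed statistics)
r = 10.0 * I ** 0.65                       # striking distance, m (common form)

h = 60.0                                   # one object: a 60 m tower at x = 0
# A vertically descending leader attaches to whatever it first comes within r of:
# the tower tip (reached at height h + sqrt(r^2 - x^2)) or flat ground (height r).
z_tower = np.where(np.abs(x) <= r,
                   h + np.sqrt(np.maximum(r**2 - x**2, 0.0)), -np.inf)
hits = z_tower > r
print("fraction attaching to tower:", hits.mean())
print("median peak current of tower strikes, kA:", np.median(I[hits]))
```

The second print line illustrates the "corresponding peak-current distribution" output: taller objects preferentially collect the larger-striking-distance, higher-current strokes.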
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria under which it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where the dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme-case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of completed full searches (derived from a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
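The model is easy to check by direct simulation (parameter values are illustrative): total searches T = N(G - 1) + U, with G the geometric number of sweeps and U the uniform stopping position within the final sweep.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
N, P, trials = 50, 0.7, 1_000_000
G = rng.geometric(P, trials)            # sweeps up to and including detection
U = rng.integers(1, N + 1, trials)      # stopping point within the last sweep
T = N * (G - 1) + U
print(T.mean(), T.std(ddof=1), kurtosis(T))   # moment summaries of the model
```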
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization seeks acceptable values for the probability of passing the demonstration (PPD) and the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size detected with minimum 90% probability and 95% confidence. This flaw size is denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the requirements of minimum required PPD, maximum allowable POF, flaw-size tolerance about the mean flaw size, and flaw-size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
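The binomial logic behind the classical 29-flaw demonstration can be verified in one line: if the true POD at the flaw size were only 0.90, the chance of detecting all 29 flaws is 0.9^29 ≈ 0.047, so a clean sweep demonstrates 90% POD at about 95% confidence.

```python
print(0.9**29, 1 - 0.9**29)   # ~0.047 pass chance under POD=0.9 -> ~95% confidence
```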
A Lyme borreliosis diagnosis probability score - no relation with antibiotic treatment response.
Briciu, Violeta T; Flonta, Mirela; Leucuţa, Daniel; Cârstina, Dumitru; Ţăţulescu, Doina F; Lupşe, Mihaela
2017-05-01
(1) To describe epidemiological and clinical data of patients presenting with suspicion of Lyme borreliosis (LB); (2) to evaluate a previously published score that classifies patients by the probability of having LB, following up the patients' clinical outcome after antibiotherapy. Inclusion criteria: patients with clinical manifestations compatible with LB and Borrelia (B.) burgdorferi positive serology, hospitalized in a Romanian hospital between January 2011 and October 2012, with erythema migrans (EM) or suspicion of Lyme neuroborreliosis (LNB) with lumbar puncture performed for diagnosis. A questionnaire was completed for each patient regarding associated diseases, tick bites or EM history, and clinical signs/symptoms at admission, at the end of treatment, and 3 months later. Two-tier testing (TTT) used an ELISA followed by a Western Blot kit. The patients were classified into groups using the LB probability score and were evaluated by a multidisciplinary team. Antibiotherapy followed guideline recommendations. Sixty-four patients were included, presenting diverse associated comorbidities. Fifty-seven patients had a positive TTT; seven had only the ELISA or the Western Blot positive. No differences in outcome were found between the groups of patients classified as very probable, probable, and little probable LB. Instead, a better post-treatment outcome was described in patients with a positive TTT. The patients investigated for suspicion of LB present diverse clinical manifestations and comorbidities that complicate differential diagnosis. The LB diagnosis probability score used in our patients did not correlate with the antibiotic treatment response, suggesting that the probability score does not bring any benefit in diagnosis.
Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra
NASA Astrophysics Data System (ADS)
Jeffers, S. V.; Asher, D. J.; Bailey, M. E.
2002-11-01
We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. Combining these with a crater scaling law and an impactor size distribution through a Monte Carlo method yields a crater size distribution. The resulting crater probability distributions are in good agreement with observed crater distributions on these asteroids.
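As a schematic of the Monte Carlo step only (the exponents, cutoff, and velocity histogram below are placeholders, not the paper's scaling law or Öpik-code output), one can draw impactor diameters from a power law, draw velocities from a histogram, and push both through a scaling relation:

```python
import numpy as np

rng = np.random.default_rng(2)

def crater_sizes(n, vel_bins, vel_probs, K=1.0, a=0.78, b=0.44, q=2.5):
    """Illustrative Monte Carlo: impactor diameters from a power law with
    cumulative index q-1, impact velocities from a histogram, and a generic
    scaling law D = K * d**a * v**b (all constants are placeholders)."""
    u = rng.random(n)
    d_min = 0.01                                   # km, assumed cutoff
    d = d_min * (1 - u) ** (-1 / (q - 1))          # inverse-CDF power-law draw
    v = rng.choice(vel_bins, size=n, p=vel_probs)  # km/s, histogram draw
    return K * d**a * v**b

vel_bins = np.array([3.0, 4.4, 5.3, 7.0])    # illustrative velocity histogram
vel_probs = np.array([0.2, 0.4, 0.3, 0.1])
D = crater_sizes(100_000, vel_bins, vel_probs)
print(np.percentile(D, [50, 90, 99]))        # percentiles of crater diameters
```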
Velocity distributions among colliding asteroids
NASA Technical Reports Server (NTRS)
Bottke, William F., Jr.; Nolan, Michael C.; Greenberg, Richard; Kolvoord, Robert A.
1994-01-01
The probability distribution for impact velocities between two given asteroids is wide, non-Gaussian, and often contains spikes according to our new method of analysis in which each possible orbital geometry for collision is weighted according to its probability. An average value would give a good representation only if the distribution were smooth and narrow. Therefore, the complete velocity distribution we obtain for various asteroid populations differs significantly from published histograms of average velocities. For all pairs among the 682 main-belt asteroids with D greater than 50 km, we find that our computed velocity distribution is much wider than previously computed histograms of average velocities. In this case, the most probable impact velocity is approximately 4.4 km/sec, compared with the mean impact velocity of 5.3 km/sec. For cases of a single asteroid (e.g., Gaspra or Ida) relative to an impacting population, the distribution we find yields lower velocities than previously reported by others. The width of these velocity distributions implies that mean impact velocities must be used with caution when calculating asteroid collisional lifetimes or crater-size distributions. Since the most probable impact velocities are lower than the mean, disruption events may occur less frequently than previously estimated. However, this disruption rate may be balanced somewhat by an apparent increase in the frequency of high-velocity impacts between asteroids. These results have implications for issues such as asteroidal disruption rates, the amount/type of impact ejecta available for meteoritical delivery to the Earth, and the geology and evolution of specific asteroids like Gaspra.
Distribution of leached radioactive material in the Legin Group Area, San Miguel County, Colorado
Rogers, Allen S.
1950-01-01
Radioactivity anomalies, which are small in magnitude, and probably are not caused by extensions of known uranium-vanadium ore bodies, were detected during the gamma-ray logging of diamond-drill holes in the Legin group of claims, southwest San Miguel County, Colo. The positions of these anomalies are at the top surfaces of mudstone strata within, and at the base of, the ore-bearing sandstone of the Salt Wash member of the Morrison formation. The distribution of these anomalies suggests that ground water has leached radioactive material from the ore bodies and has carried it down dip and laterally along the top surfaces of underlying impermeable mudstone strata for distances as great as 300 feet. The anomalies are probably caused by radon and its daughter elements. Preliminary tests indicate that radon in quantities up to 10⁻⁷ curies per liter may be present in ground water flowing along sandstone-mudstone contacts under carnotite ore bodies. In comparison, the radium content of the same water is less than 10⁻¹⁰ curies per liter. Further substantiation of the relationship between ore bodies, the movement of water, and the radon-caused anomalies may greatly increase the scope of gamma-ray logs of drill holes as an aid to prospecting.
The Probability Distribution for a Biased Spinner
ERIC Educational Resources Information Center
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
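As a classroom-style illustration of the idea (the side weights are hypothetical), the spinner's landing probabilities can be taken proportional to per-side weights, such as edge lengths, and checked against simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# A biased spinner: the chance of landing on a side is taken proportional
# to an assumed weight for that side (e.g., its edge length).
weights = np.array([3.0, 1.0, 1.0, 2.0])     # hypothetical side weights
probs = weights / weights.sum()

spins = rng.choice(len(weights), size=100_000, p=probs)
print(np.bincount(spins) / len(spins))       # empirical frequencies
print(probs)                                 # theoretical probabilities
```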
NASA Astrophysics Data System (ADS)
Gao, Haixia; Li, Ting; Xiao, Changming
2016-05-01
When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state. Obviously, in this process, there is a series of nonequilibrium states. With the assistance of Bayesian statistics and the hyperensemble, a probable probability distribution of these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The state with the largest probability is the equilibrium state, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion also holds in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state can stay, then the velocity at which a nonequilibrium state returns to equilibrium can also be determined through the reciprocal of the derivative of this probability. It tells us that the farther the state is from equilibrium, the faster the returning velocity; as the system nears its equilibrium state, the velocity becomes smaller and smaller, finally tending to 0 when the equilibrium state is reached.
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is a proper probability distribution for driving evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we find that exponential distributions, or hybrid ones (e.g., power laws with exponential cutoff) popular in network science, may replace the original power law in a modified τ-EO method called the self-organized algorithm (SOA) and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO, and SOA, based on experimental results for random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; exponential and hybrid distributions may be other choices.
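The distinction the study draws can be made concrete in a few lines: in EO-style moves, components are ranked from worst to best and the k-th worst is mutated with probability given by the chosen distribution over ranks. A sketch with illustrative τ and λ values (not the paper's tuned parameters):

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_selection_probs(n, kind="power", tau=1.4, lam=0.05):
    """Probability of mutating the k-th worst component (k = 1..n) in an
    EO-style move, under a power-law or an exponential rank distribution."""
    k = np.arange(1, n + 1)
    w = k ** -tau if kind == "power" else np.exp(-lam * k)
    return w / w.sum()

n = 50
for kind in ("power", "exponential"):
    p = rank_selection_probs(n, kind)
    sample = rng.choice(np.arange(1, n + 1), size=10, p=p)
    print(kind, p[:3].round(3), "sampled ranks:", sorted(sample))
```

Swapping the rank distribution is the entire modification; the rest of the EO loop (rank, mutate, accept unconditionally) is unchanged.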
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
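A toy version of the single-sensor chain (the source level, noise level, absorption coefficient, and detector curve below are illustrative stand-ins, not the paper's inputs) maps range to SNR via the passive sonar equation and SNR to detection probability via a smooth detector characterization:

```python
import numpy as np
from scipy.stats import norm

def detection_probability(r_km, SL=170.0, NL=70.0, DT=10.0, sd=3.0, alpha=9.0):
    """Sketch: spherical-spreading transmission loss plus absorption,
    passive sonar equation, then a Gaussian detector characterization.
    All dB values are illustrative placeholders."""
    TL = 20.0 * np.log10(r_km * 1000.0) + alpha * r_km   # spreading + absorption
    snr = SL - TL - NL                                   # passive sonar equation
    return norm.cdf(snr - DT, scale=sd)                  # P(detect | SNR)

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"{r:.1f} km  P(detect) = {detection_probability(r):.3f}")
```

Averaging such a curve over assumed distributions of source level, beam pattern, and depth is what produces the single-sensor detection function used in the density estimate.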
Rapidly assessing the probability of exceptionally high natural hazard losses
NASA Astrophysics Data System (ADS)
Gollini, Isabella; Rougier, Jonathan
2014-05-01
One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
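The flavor of such bounds can be shown with a toy event loss table (numbers hypothetical, and these textbook inequalities are generally looser than the paper's bounds). Markov's and Cantelli's inequalities need only the mean and variance of the annual loss, both computable instantly from the table:

```python
import numpy as np

# Hypothetical event loss table: per-event annual occurrence probability
# and loss (in millions). Annual loss = sum of Bernoulli(p_i) * loss_i,
# events assumed independent.
p = np.array([0.02, 0.01, 0.005, 0.001])
loss = np.array([10.0, 50.0, 200.0, 1000.0])

mean = np.sum(p * loss)
var = np.sum(p * (1 - p) * loss ** 2)

capital = 100.0
markov = mean / capital                            # Markov's inequality
cantelli = var / (var + (capital - mean) ** 2)     # one-sided Chebyshev
print(f"mean={mean:.2f}  var={var:.1f}")
print(f"Markov bound:   P(L > {capital}) <= {markov:.3f}")
print(f"Cantelli bound: P(L > {capital}) <= {cantelli:.3f}")
```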
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could accommodate the results only by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
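The z-ROC claim can be checked directly from the UVSD setup: with Gaussian evidence, the z-ROC slope equals the lure-to-target SD ratio, so inflating the lure SD steepens the slope. A sketch with assumed parameter values (not fitted to the study's data):

```python
import numpy as np
from scipy.stats import norm

def zroc_slope(mu_t=1.0, sd_t=1.25, sd_l=1.0):
    """Hit and false-alarm rates across confidence criteria for Gaussian
    evidence, then the slope of z(H) against z(F). Analytically the slope
    is sd_l / sd_t for the UVSD model."""
    c = np.linspace(-1.5, 2.5, 9)                          # confidence criteria
    zH = norm.ppf(1 - norm.cdf(c, loc=mu_t, scale=sd_t))   # z(hit rate)
    zF = norm.ppf(1 - norm.cdf(c, loc=0.0, scale=sd_l))    # z(false-alarm rate)
    return np.polyfit(zF, zH, 1)[0]

print(zroc_slope(sd_l=1.0))   # smaller lure SD: shallower slope (0.80)
print(zroc_slope(sd_l=1.2))   # larger lure SD, as with priming: steeper (0.96)
```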
Effect of weak measurement on entanglement distribution over noisy channels.
Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C H
2016-03-03
Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made on the development of it. Recently, it has been found that the null-result weak measurement (NRWM) can be used to enhance probabilistically the entanglement of a single copy of amplitude-damped entangled state. This paper investigates remote distributions of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of EDPs with a certain probability. However, we find that the NRWM would contribute to remote distributions of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation in protecting entanglement from decoherence.
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodruon urundeuva, better fitting was obtained with the log-normal function.
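A sketch of this model-selection workflow using scipy (the diameters are synthetic placeholders; two-parameter forms are imposed by fixing the location at zero, and scipy's burr12 stands in for the Burr family):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
dbh = stats.gamma.rvs(2.0, scale=4.0, size=500, random_state=rng) + 2.0  # fake diameters (cm)

candidates = {
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "Weibull 2P": stats.weibull_min,
    "Burr": stats.burr12,
}
for name, dist in candidates.items():
    params = dist.fit(dbh, floc=0)                  # location fixed at zero
    k = len(params) - 1                             # number of free parameters
    aic = 2 * k - 2 * np.sum(dist.logpdf(dbh, *params))
    print(f"{name:10s} AIC = {aic:.1f}")            # smaller AIC = better fit
```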
A developmental basis for stochasticity in floral organ numbers
Kitazawa, Miho S.; Fujimoto, Koichi
2014-01-01
Stochasticity appears ubiquitously and inevitably at all levels, from molecular traits to multicellular morphological traits. Intrinsic stochasticity in biochemical reactions underlies the typical intercellular distributions of chemical concentrations, e.g., morphogen gradients, which can give rise to stochastic morphogenesis. While the universal statistics and mechanisms underlying the stochasticity at the biochemical level have been widely analyzed, those at the morphological level have not. Such morphological stochasticity is found in floral organ numbers. Although the floral organ number is a hallmark of floral species, it can vary stochastically even within an individual plant. The probability distribution of the floral organ number within a population is usually asymmetric, i.e., it is more likely to increase rather than decrease from the modal value, or vice versa. We combined field observations, statistical analysis, and mathematical modeling to study the developmental basis of the variation in floral organ numbers among 50 species, mainly from Ranunculaceae and several other families of core eudicots. We compared six hypothetical mechanisms and found that a modified error function reproduced much of the asymmetric variation found in eudicot floral organ numbers. The error function is derived from mathematical modeling of floral organ positioning, and its parameters represent measurable distances in the floral bud morphologies. The model predicts two developmental sources of the organ-number distributions: stochastic shifts in the expression boundaries of homeotic genes and a semi-concentric (whorled-type) organ arrangement. Other models reproduced, in a species- or organ-specific manner, different types of distributions that reflect different developmental processes. The organ-number variation could be an indicator of stochasticity in organ fate determination and organ positioning. PMID:25404932
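The error-function idea can be caricatured in a few lines: if the organ number arises from thresholding a noisy continuous variable (such as a boundary position), the integer probabilities are differences of Gaussian CDFs (error functions), and uneven bin edges yield the asymmetry the authors describe. All values below are illustrative, not fitted:

```python
import numpy as np
from scipy.stats import norm

# Organ number as a noisy continuous variable cut into integer bins;
# bin probabilities are differences of error functions. The uneven,
# hypothetical edges make increases more likely than decreases.
mu, sd = 5.0, 0.4                 # modal organ number and noise scale
edges = mu + np.array([-np.inf, -1.7, -0.7, 0.5, 1.3, np.inf])
probs = np.diff(norm.cdf(edges, loc=mu, scale=sd))
for n_organs, p in zip(range(3, 8), probs):
    print(n_organs, round(float(p), 4))   # asymmetric around the mode of 5
```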
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log-normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
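A stripped-down version of such a simulation (the drift, threshold, and noise scale are arbitrary here, and no leak or refractory period is modeled) shows how different zero-mean noise distributions feed the same accumulate-and-reset mechanism:

```python
import numpy as np

rng = np.random.default_rng(6)

def isi_distribution(noise, thresh=30.0, n_spikes=5_000):
    """Integrate-and-fire sketch: accumulate a unit drive plus a noise
    sample each time step; fire and reset at threshold; record intervals."""
    intervals, v, t = [], 0.0, 0
    while len(intervals) < n_spikes:
        v += 1.0 + next(noise)
        t += 1
        if v >= thresh:
            intervals.append(t)
            v, t = 0.0, 0
    return np.array(intervals)

def gaussian():                  # no skew, normokurtic
    while True:
        yield rng.normal(0.0, 1.0)

def uniform():                   # no skew, platykurtic, unit variance
    while True:
        yield rng.uniform(-np.sqrt(3), np.sqrt(3))

for gen, name in ((gaussian(), "gaussian"), (uniform(), "uniform")):
    isi = isi_distribution(gen)
    print(name, round(isi.mean(), 2), round(isi.std(), 2))
```

Comparing the two printed interval statistics illustrates the paper's point: the inter-impulse distribution is largely insensitive to the shape of the input noise.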
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
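For comparison with the coined case the abstract mentions, here is a minimal coined walk on the line (a Hadamard coin rather than Szegedy's bipartite construction) whose probability distribution exhibits the characteristic ballistic spread:

```python
import numpy as np

# Coined discrete-time quantum walk on the line with a Hadamard coin;
# illustrates computing the walker's probability distribution. Szegedy's
# Markov-chain construction is more involved and not reproduced here.
T = 200
pos = 2 * T + 1
psi = np.zeros((pos, 2), dtype=complex)
psi[T, 0] = 1.0                                  # start at origin, coin |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

for _ in range(T):
    psi = psi @ H.T                              # apply coin at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                 # coin 0 moves right
    shifted[:-1, 1] = psi[1:, 1]                 # coin 1 moves left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
x = np.arange(-T, T + 1)
print("rms position:", np.sqrt((prob * x**2).sum()))  # grows ~ 0.54*T, i.e. ballistic
```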
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
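The non-linearity at issue can be seen with a Boltzmann-style occupancy model (the energies and chemical potentials below are illustrative, not measured values): as protein concentration rises, high-affinity sites saturate, which a model with probability proportional to exp(−E) cannot mimic:

```python
import numpy as np

# Occupancy of binding sites under a Boltzmann form p = 1/(1 + exp(E - mu)),
# with E the site binding energy (kT) and mu ~ log protein concentration.
energies = np.array([0.0, 0.5, 1.0, 3.0, 6.0])   # illustrative site energies
for mu in (-2.0, 0.0, 2.0):
    occupancy = 1.0 / (1.0 + np.exp(energies - mu))
    print(f"mu={mu:+.0f}  occupancy={occupancy.round(3)}")
# At mu=+2 the three strongest sites are all near saturation, while a
# probabilistic model proportional to exp(-E) keeps them well separated.
```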
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and it can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
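A bare-bones version of the two-step method (the target spectrum and marginal are chosen arbitrarily here) colors white Gaussian noise in the frequency domain and then applies the memoryless inverse transform:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 2 ** 16

# Step 1: shape white Gaussian noise to an assumed Lorentzian low-pass
# spectrum (any target amplitude filter H(f) could be substituted).
white = rng.normal(size=n)
f = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                          # normalize to unit variance

# Step 2: memoryless inverse transform to the target marginal (an
# exponential here); rank order, and hence most correlation structure,
# is preserved, though the spectrum is only approximately conserved.
u = stats.norm.cdf(colored)
samples = stats.expon.ppf(u, scale=2.0)
print(samples.mean(), samples.std())              # both ~2.0 for expon(scale=2)
```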
NASA Technical Reports Server (NTRS)
Bollenbacher, Gary; Guptill, James D.
1999-01-01
This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
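The structure of such a calculation can be sketched as a 2-D integral of a Gaussian position-error density over the disc of combined object radius in the encounter plane (the numbers below are hypothetical, and the report derives a closed form rather than integrating numerically):

```python
import numpy as np

def collision_probability(miss, sigma_x, sigma_y, radius, n=400):
    """Encounter-plane sketch: integrate a Gaussian relative-position
    density over a disc of the combined object radius, with the nominal
    miss distance placed along x. Grid integration, illustrative accuracy."""
    x = np.linspace(-radius, radius, n)
    y = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(x, y)
    inside = X**2 + Y**2 <= radius**2
    pdf = (np.exp(-((X + miss) ** 2) / (2 * sigma_x**2)
                  - Y**2 / (2 * sigma_y**2))
           / (2 * np.pi * sigma_x * sigma_y))
    cell = (x[1] - x[0]) * (y[1] - y[0])
    return np.sum(pdf[inside]) * cell

# Hypothetical values: 10 m combined radius, sub-km position uncertainties.
print(collision_probability(miss=500.0, sigma_x=1000.0, sigma_y=300.0, radius=10.0))
```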
Estimating alarm thresholds and the number of components in mixture distributions
NASA Astrophysics Data System (ADS)
Burr, Tom; Hamada, Michael S.
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
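A minimal illustration of characterizing such residual data with scikit-learn (the residuals are synthetic, and the component count is chosen by BIC as a stand-in for the paper's assessment procedure):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)

# Synthetic wait-mode residuals: a dominant Gaussian plus a small
# secondary component mimicking event-marking errors (values illustrative).
residuals = np.concatenate([
    rng.normal(0.0, 1.0, size=900),
    rng.normal(4.0, 2.0, size=100),
]).reshape(-1, 1)

# Select the number of mixture components by BIC, then set an alarm
# threshold as a high quantile of the fitted mixture (via simulation).
fits = [GaussianMixture(k, random_state=0).fit(residuals) for k in (1, 2, 3)]
best = min(fits, key=lambda m: m.bic(residuals))
print("components:", best.n_components)
sim = best.sample(200_000)[0].ravel()
print("alarm threshold (99.9%):", round(float(np.quantile(sim, 0.999)), 2))
```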
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameters probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.
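One elementary ingredient of PCC can be shown compactly: transforming two dependent parameters to pseudo-observations and identifying a Gaussian pair copula by inverting the Kendall's-tau relation ρ = sin(πτ/2). A full D-Vine chains such pair copulas with conditioning; the data below are fabricated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Fabricated dependent modal parameters (e.g., a natural frequency and
# an effective parameter of the same mode).
f1 = rng.normal(100.0, 5.0, size=2000)
f2 = 0.8 * f1 + rng.normal(0.0, 3.0, size=2000)

u = stats.rankdata(f1) / (len(f1) + 1)     # pseudo-observations in (0, 1)
v = stats.rankdata(f2) / (len(f2) + 1)
tau, _ = stats.kendalltau(u, v)
rho = np.sin(np.pi * tau / 2.0)            # Gaussian pair-copula parameter
print(f"tau={tau:.3f}  Gaussian-copula rho={rho:.3f}")
```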
Rincon, Diego F; Hoy, Casey W; Cañas, Luis A
2015-04-01
Most predator-prey models extrapolate functional responses from small-scale experiments assuming spatially uniform within-plant predator-prey interactions. However, some predators focus their search in certain plant regions, and herbivores tend to select leaves to balance their nutrient uptake and exposure to plant defenses. Individual-based models that account for heterogeneous within-plant predator-prey interactions can be used to scale up functional responses, but they would require the generation of explicit prey spatial distributions within plant architecture models. The silverleaf whitefly, Bemisia tabaci biotype B (Gennadius) (Hemiptera: Aleyrodidae), is a significant pest of tomato crops worldwide that exhibits highly aggregated populations at several spatial scales, including within the plant. As part of an analytical framework to understand predator-silverleaf whitefly interactions, the objective of this research was to develop an algorithm to generate explicit spatial counts of silverleaf whitefly nymphs within tomato plants. The algorithm requires the plant size and the number of silverleaf whitefly individuals to distribute as inputs, and includes models that describe infestation probabilities per leaf nodal position and the aggregation pattern of the silverleaf whitefly within tomato plants and leaves. The output is a simulated number of silverleaf whitefly individuals for each leaf and leaflet on one or more plants. Parameter estimation was performed using nymph counts per leaflet censused from 30 artificially infested tomato plants. Validation revealed a substantial agreement between algorithm outputs and independent data that included the distribution of counts of both eggs and nymphs. This algorithm can be used in simulation models that explore the effect of local heterogeneity on whitefly-predator dynamics. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
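The algorithm's two ingredients, an infestation probability that varies with leaf nodal position and within-plant aggregation, can be caricatured with a Dirichlet-multinomial draw (the decay rate and aggregation parameter below are illustrative, not the fitted values):

```python
import numpy as np

rng = np.random.default_rng(11)

def distribute_nymphs(total, n_leaves, aggregation=0.5):
    """Sketch: leaf preference decays with nodal position, and counts are
    clumped via a Dirichlet-multinomial (smaller 'aggregation' gives more
    clumping). Functional forms and parameters are placeholders."""
    node = np.arange(1, n_leaves + 1)
    pref = np.exp(-0.3 * node)                        # assumed nodal effect
    pref /= pref.sum()
    shares = rng.dirichlet(aggregation * pref * n_leaves)  # overdispersed shares
    return rng.multinomial(total, shares)

print(distribute_nymphs(total=200, n_leaves=12))      # counts per leaf
```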
Probability distribution functions in turbulent convection
NASA Technical Reports Server (NTRS)
Balachandar, S.; Sirovich, L.
1991-01-01
Results of an extensive investigation of probability distribution functions (pdfs) for Rayleigh-Bénard convection, in the hard turbulence regime, are presented. It is shown that the pdfs exhibit a high degree of internal universality. In certain cases this universality is established within two Kolmogorov scales of a boundary. A discussion of the factors leading to the universality is presented.