Probability distributions of continuous measurement results for conditioned quantum evolution
NASA Astrophysics Data System (ADS)
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, demonstrating anomalously large average outputs and a sudden jump in the time-integrated output. We present and discuss the numerical evaluation of the probability distribution, aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
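A minimal sketch of the two-step recipe summarized above, assuming an illustrative low-pass power spectral density and an exponential target amplitude distribution (neither is taken from the paper):

```python
# (1) Color a white Gaussian field to a target power spectral density.
# (2) Map it point-wise to a target amplitude distribution via CDFs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# Step 1: white Gaussian noise shaped in the Fourier domain.
white = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
f = np.sqrt(fx[None, :]**2 + fy[:, None]**2)
psd = 1.0 / (1.0 + (f / 0.05)**2)          # example low-pass PSD (assumed, not from the paper)
colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
colored /= colored.std()                    # unit-variance colored Gaussian field

# Step 2: point-wise transform Gaussian -> uniform -> desired amplitude pdf (here exponential).
u = stats.norm.cdf(colored)
signal = stats.expon.ppf(u, scale=2.0)
```

As the abstract notes, the memoryless point-wise transform slightly distorts the imposed spectrum, which is why the authors describe the method as an engineering approach.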
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function which uses the mean, variance and Hurst coefficient of the model space to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.
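A hedged sketch of drawing a fractal (power-law spectrum) initial-model realization from a prescribed mean, variance and Hurst coefficient; the spectral exponent beta = 2H + 1 and all numerical values are illustrative assumptions, not the authors' parameterization:

```python
import numpy as np

def fractal_model(n, hurst, mean, std, rng):
    # Spectral synthesis: |F(f)| ~ f^(-beta/2) gives a PSD ~ f^(-beta), beta = 2H + 1 (assumed fBm-like profile).
    beta = 2.0 * hurst + 1.0
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                       # avoid division by zero at DC
    amp = freqs ** (-beta / 2.0)
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    profile = np.fft.irfft(amp * phases, n)
    profile = (profile - profile.mean()) / profile.std()
    return mean + std * profile               # candidate initial model for the global search

# Example: a 512-sample velocity-like profile with H = 0.7 (values purely illustrative).
model0 = fractal_model(512, hurst=0.7, mean=2500.0, std=300.0, rng=np.random.default_rng(1))
```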
A hydroclimatological approach to predicting regional landslide probability using Landlab
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.
2018-02-01
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
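A hedged Monte Carlo sketch of the general approach described above: sample uncertain soil and hydrologic parameters, evaluate an infinite-slope factor of safety, and take P(FS < 1) as the probability of initiation. The factor-of-safety form, the steady-state wetness index, and all parameter ranges are illustrative assumptions rather than the Landlab component itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
g, rho_w = 9.81, 1000.0

theta = np.deg2rad(35.0)                          # slope angle of the grid cell
soil_depth = 1.5                                  # m, one of the soil-depth scenarios
cohesion = rng.uniform(2e3, 10e3, n)              # Pa, combined root + soil cohesion
phi = np.deg2rad(rng.uniform(28.0, 40.0, n))      # internal friction angle
rho_s = rng.uniform(1600.0, 2000.0, n)            # kg/m^3, wet soil density
recharge = rng.gumbel(20.0, 8.0, n)               # mm/day, annual-max daily recharge (assumed)
transmissivity = rng.uniform(5.0, 50.0, n)        # m^2/day
contrib_area_per_width = 100.0                    # m, specific catchment area

# Steady-state relative wetness (capped at saturation), then infinite-slope factor of safety.
wetness = np.clip((recharge / 1000.0) * contrib_area_per_width
                  / (transmissivity * np.sin(theta)), 0.0, 1.0)
fs = (cohesion / (rho_s * g * soil_depth * np.sin(theta) * np.cos(theta))
      + (1.0 - wetness * rho_w / rho_s) * np.tan(phi) / np.tan(theta))

p_landslide = np.mean(fs < 1.0)                   # annual probability of initiation for this cell
```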
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and the derivation of the distributions is discussed.
NASA Astrophysics Data System (ADS)
Shemer, L.; Sergeeva, A.
2009-12-01
The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly-distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield a sufficient sample size for reliable statistics. Throughout all experiments, an effort was made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at a distance of about 100 m. Acknowledgement: This study was carried out in the framework of the EC-supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for the rectangular initial spectral shape; carrier wave period T0 = 1.5 s.
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
2017-03-27
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
Quantum work in the Bohmian framework
NASA Astrophysics Data System (ADS)
Sampaio, R.; Suomela, S.; Ala-Nissila, T.; Anders, J.; Philbin, T. G.
2018-01-01
At nonzero temperature classical systems exhibit statistical fluctuations of thermodynamic quantities arising from the variation of the system's initial conditions and its interaction with the environment. The fluctuating work, for example, is characterized by the ensemble of system trajectories in phase space and, by including the probabilities for various trajectories to occur, a work distribution can be constructed. However, without phase-space trajectories, the task of constructing a work probability distribution in the quantum regime has proven elusive. Here we use quantum trajectories in phase space and define fluctuating work as power integrated along the trajectories, in complete analogy to classical statistical physics. The resulting work probability distribution is valid for any quantum evolution, including cases with coherences in the energy basis. We demonstrate the quantum work probability distribution and its properties with an exactly solvable example of a driven quantum harmonic oscillator. An important feature of the work distribution is its dependence on the initial statistical mixture of pure states, which is reflected in higher moments of the work. The proposed approach introduces a fundamentally different perspective on quantum thermodynamics, allowing full thermodynamic characterization of the dynamics of quantum systems, including the measurement process.
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Statistics of work performed on a forced quantum oscillator.
Talkner, Peter; Burada, P Sekhar; Hänggi, Peter
2008-07-01
Various aspects of the statistics of work performed by an external classical force on a quantum mechanical system are elucidated for a driven harmonic oscillator. In this special case two parameters are introduced that are sufficient to completely characterize the force protocol. Explicit results for the characteristic function of work and the corresponding probability distribution are provided and discussed for three different types of initial states of the oscillator: microcanonical, canonical, and coherent states. Depending on the choice of the initial state the probability distributions of the performed work may greatly differ. This result in particular also holds true for identical force protocols. General fluctuation and work theorems holding for microcanonical and canonical initial states are confirmed.
Shape of growth-rate distribution determines the type of Non-Gibrat’s Property
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki
2011-11-01
In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2005-01-01
A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
Distributed Tracking in Distributed Sensor Networks
1988-05-26
Figures 6-11 and 6-12: Case I and Case II Initial Global Track; Figure 6-13: Local Tracking Results with Multiple Model Approach; Figure 6-14: Model Probability History.
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Redner, S.; Vazquez, F.
2007-02-01
We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q ≤ 1/2, and the stronger player wins with probability 1-q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^(-β). Different decay exponents are found analytically for sequential dynamics, β_seq = 1 - 2q, and parallel dynamics, β_par = 1 + ln(1-q)/ln 2. The distribution of player strengths becomes self-similar in the long time limit with an algebraic tail. Our theory successfully describes statistics of the US college basketball national championship tournament.
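A quick Monte Carlo sketch of the parallel-dynamics version of this model (uniform strengths, the weaker player wins each match with probability q); the function and values below are illustrative, not the authors' code:

```python
import numpy as np

def winner_rank(n_players, q, rng):
    strengths = rng.random(n_players)          # uniform initial strength distribution
    players = rng.permutation(strengths)
    while len(players) > 1:
        a, b = players[0::2], players[1::2]    # parallel dynamics: one full round at a time
        upset = rng.random(len(a)) < q
        stronger = np.maximum(a, b)
        weaker = np.minimum(a, b)
        players = np.where(upset, weaker, stronger)
    return 1 + np.sum(strengths > players[0])  # rank 1 = strongest player in the field

rng = np.random.default_rng(3)
ranks = [winner_rank(2**10, q=0.3, rng=rng) for _ in range(2000)]
# Mean winner rank; the abstract reports algebraic scaling with N,
# with beta_par = 1 + ln(1-q)/ln 2 for parallel dynamics.
print(np.mean(ranks))
```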
Cowell, Robert G
2018-05-04
Current models for single source and mixture samples, and the probabilistic genotyping software based on them that is used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
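A generic sketch of the technique named above: recover a probability mass function from its probability generating function by evaluating it at roots of unity and inverting with a discrete Fourier transform. The compound Poisson-binomial example is a stand-in, not the paper's amplification model:

```python
import numpy as np

def pgf_to_pmf(G, n):
    # p_k = (1/n) * sum_j G(z_j) z_j^(-k), with z_j the n-th roots of unity.
    # n must comfortably exceed the support of interest (aliasing wraps around).
    z = np.exp(2j * np.pi * np.arange(n) / n)
    return np.real(np.fft.fft(G(z))) / n

# Example: a Poisson(3) number of templates, each amplified into a Binomial(10, 0.4)
# number of amplicons; a hypothetical compound model for illustration only.
lam, m, p = 3.0, 10, 0.4
G_binom = lambda z: (1 - p + p * z) ** m
G_compound = lambda z: np.exp(lam * (G_binom(z) - 1.0))
pmf = pgf_to_pmf(G_compound, 64)
```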
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
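A minimal sketch of iterative proportion(al) fitting on a discrete joint probability, here with univariate marginal constraints on a two-variable joint for brevity (the article imposes bivariate marginals on a higher-dimensional multivariate probability); values are illustrative:

```python
import numpy as np

def ipf(joint, target_marginals, n_iter=100, tol=1e-10):
    """joint: K-dimensional array of probabilities; target_marginals: one 1-D array
    per axis, each summing to 1."""
    p = joint / joint.sum()
    for _ in range(n_iter):
        max_change = 0.0
        for axis, target in enumerate(target_marginals):
            other = tuple(i for i in range(p.ndim) if i != axis)
            current = p.sum(axis=other)                 # current marginal along this axis
            factor = np.where(current > 0, target / np.maximum(current, 1e-300), 0.0)
            shape = [1] * p.ndim
            shape[axis] = -1
            p_new = p * factor.reshape(shape)           # rescale to match the imposed marginal
            max_change = max(max_change, np.abs(p_new - p).max())
            p = p_new
        if max_change < tol:
            break
    return p

# Example: 3 facies at 2 locations, uniform initial joint, different target proportions.
initial = np.full((3, 3), 1.0 / 9.0)
fitted = ipf(initial, [np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.5, 0.3])])
```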
Timescales of isotropic and anisotropic cluster collapse
NASA Astrophysics Data System (ADS)
Bartelmann, M.; Ehlers, J.; Schneider, P.
1993-12-01
From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.
Estimate of Probability of Crack Detection from Service Difficulty Report Data.
DOT National Transportation Integrated Search
1995-09-01
The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...
Estimate of probability of crack detection from service difficulty report data
DOT National Transportation Integrated Search
1994-09-01
The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...
Long-time behavior of material-surface curvature in isotropic turbulence
NASA Technical Reports Server (NTRS)
Girimaji, S. S.
1992-01-01
The behavior at large times of the curvature of material elements in turbulence is investigated using Lagrangian velocity-gradient time series obtained from direct numerical simulations of isotropic turbulence. The main objectives are: to study the asymptotic behavior of the curvature pdf as a function of initial curvature and shape; and to establish whether the curvature of an initially plane material element tends to a stationary probability distribution. The evidence available in the literature about the asymptotic curvature pdf of initially flat surfaces is ambiguous, and the conjecture is that it is quasi-stationary. In this work several material-element ensembles of different initial curvatures and shapes are studied. It is found that, at long times, the moments of the logarithm of curvature are independent of the initial pdf of curvature. This, it is argued, supports the view that the curvature attains a stationary distribution at long times. It is also shown that, irrespective of initial shape or curvature, the shape of any material element at long times is cylindrical with high probability.
Option volatility and the acceleration Lagrangian
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang
2014-01-01
This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is obtained by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.
A Dual Power Law Distribution for the Stellar Initial Mass Function
NASA Astrophysics Data System (ADS)
Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett
2018-05-01
We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to consider heterogeneous thematic classes which cannot be well fitted by a unimodal Wishart distribution. In order to make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Oil spill contamination probability in the southeastern Levantine basin.
Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam
2015-02-15
Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
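A hedged sketch of how such contamination-probability maps can be assembled from an ensemble of simulated spill trajectories (count the fraction of virtual spills whose track ever enters each grid cell); the grid and trajectory inputs are assumptions, not the authors' model output:

```python
import numpy as np

def contamination_probability(trajectories, lon_edges, lat_edges):
    """trajectories: list of (n_points, 2) arrays of (lon, lat) for each virtual spill."""
    hit_counts = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    for track in trajectories:
        ix = np.digitize(track[:, 0], lon_edges) - 1
        iy = np.digitize(track[:, 1], lat_edges) - 1
        ok = (ix >= 0) & (ix < hit_counts.shape[1]) & (iy >= 0) & (iy < hit_counts.shape[0])
        cells = set(zip(iy[ok], ix[ok]))       # each spill counted at most once per cell
        for j, i in cells:
            hit_counts[j, i] += 1
    return hit_counts / len(trajectories)      # probability map for this season's ensemble
```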
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
NASA Astrophysics Data System (ADS)
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions for the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with no negligible contribution are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses on experimental data. The exceeding probability distribution of the maximum slamming pressure peak and distribution parameters were analyzed, and the results show that the exceeding probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceeding probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions were comprehensively presented, and the parameter values of the Weibull distribution of wave-slamming pressure peaks were different due to different test models. The parameter values were found to decrease due to the increased stiffness of the elastic support. The damage criterion of the structure model caused by the wave impact was initially discussed, and the structure model was destroyed when the average slamming time was greater than a certain value during the duration of the wave impact. The conclusions of the experimental study were then described.
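An illustrative sketch (not the authors' processing chain) of fitting a three-parameter Weibull distribution to measured slamming-pressure peaks and evaluating the exceedance probability at the sample mean; the data file name is hypothetical:

```python
import numpy as np
from scipy import stats

peaks = np.loadtxt("slamming_pressure_peaks.txt")     # hypothetical file of measured pressure peaks
shape, loc, scale = stats.weibull_min.fit(peaks)      # three-parameter (shape, location, scale) fit
exceedance = stats.weibull_min.sf(peaks.mean(), shape, loc=loc, scale=scale)
print(f"P(peak > sample mean) = {exceedance:.3f}")    # the abstract reports values above 36.79%
```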
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), first-passage time] depend on the time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when ta≪t and ta≫t.
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
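A simplified stand-in for the flagging idea, assuming conditional probabilities estimated from historical approved plans rather than the full Hugin-learned Bayesian network; contexts, values and the threshold are hypothetical:

```python
from collections import Counter, defaultdict

def build_cpt(history):
    """history: iterable of (clinical_context, parameter_value) pairs from past approved plans."""
    counts = defaultdict(Counter)
    for context, value in history:
        counts[context][value] += 1
    return {ctx: {v: n / sum(c.values()) for v, n in c.items()} for ctx, c in counts.items()}

def flag_plan(cpt, context, value, threshold=0.02):
    p = cpt.get(context, {}).get(value, 0.0)
    return p < threshold, p                   # low probability -> flag for human review

# Hypothetical history: 95 common and 5 uncommon prescriptions for one clinical context.
history = [(("lung", "SBRT"), "50Gy/5fx")] * 95 + [(("lung", "SBRT"), "60Gy/8fx")] * 5
cpt = build_cpt(history)
print(flag_plan(cpt, ("lung", "SBRT"), "20Gy/1fx"))   # unusual prescription -> (True, 0.0)
```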
50 CFR 648.21 - Procedures for determining initial annual amounts.
Code of Federal Regulations, 2011 CFR
2011-10-01
... maximum probability of overfishing as informed by the OFL distribution will be 35 percent for stocks with... specifications established pursuant to this section may be adjusted by the Regional Administrator, in... the reasons for such an action and providing a 30-day public comment period. (f) Distribution of...
NASA Astrophysics Data System (ADS)
Ovchinnikov, Igor V.; Schwartz, Robert N.; Wang, Kang L.
2016-03-01
The concept of deterministic dynamical chaos has a long history and is well established by now. Nevertheless, its field theoretic essence and its stochastic generalization have been revealed only very recently. Within the newly found supersymmetric theory of stochastics (STS), all stochastic differential equations (SDEs) possess topological or de Rham supersymmetry, and stochastic chaos is the phenomenon of its spontaneous breakdown. Even though the STS is free of approximations and thus is technically solid, it is still missing a firm interpretational basis in order to be physically sound. Here, we make a few important steps toward the construction of the interpretational foundation for the STS. In particular, we discuss that one way to understand why the ground states of chaotic SDEs are conditional (not total) probability distributions is that some of the variables have infinite memory of initial conditions and thus are not "thermalized", i.e., cannot be described by initial-conditions-independent probability distributions. As a result, the definitive assumption of physical statistics that the ground state is a steady-state total probability distribution is not valid for chaotic SDEs.
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
Distribution of chirality in the quantum walk: Markov process and entanglement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanelli, Alejandro
The asymptotic behavior of the quantum walk on the line is investigated, focusing on the probability distribution of chirality independently of position. It is shown analytically that this distribution has a longtime limit that is stationary and depends on the initial conditions. This result is unexpected in the context of the unitary evolution of the quantum walk as it is usually linked to a Markovian process. The asymptotic value of the entanglement between the coin and the position is determined by the chirality distribution. For given asymptotic values of both the entanglement and the chirality distribution, it is possible to find the corresponding initial conditions within a particular class of spatially extended Gaussian distributions.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
Activated recombinative desorption: A potential component in mechanisms of spacecraft glow
NASA Technical Reports Server (NTRS)
Cross, J. B.
1985-01-01
The concept of activated recombination of atomic species on surfaces can explain the production of vibrationally and translationally excited desorbed molecular species. Equilibrium statistical mechanics predicts that the molecular quantum state distributions of desorbing molecules are a function of surface temperature only when the adsorption probability is unity and independent of initial collision conditions. In most cases, the adsorption probability is dependent upon initial conditions such as collision energy or the internal quantum state distribution of impinging molecules. From detailed balance, such dynamical behavior is reflected in the internal quantum state distribution of the desorbing molecule. This concept, activated recombinative desorption, may offer a common thread in proposed mechanisms of spacecraft glow. Using molecular beam techniques and equipment available at Los Alamos, which includes a high-translational-energy O-atom beam source, mass spectrometric detection of desorbed species, chemiluminescence/laser induced fluorescence detection of electronic and vibrationally excited reaction products, and Auger detection of surface adsorbed reaction products, a fundamental study of the gas-surface chemistry underlying the glow process is proposed.
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
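A minimal sketch of composing a route travel-time distribution from two links through a link-to-link conditional distribution, which is how the upstream-downstream correlation enters; the discretization and example pmfs are assumptions, not FPM data:

```python
import numpy as np

t = np.arange(1, 6)                                   # travel-time bins (minutes) for each link
p_link1 = np.array([0.1, 0.3, 0.4, 0.15, 0.05])       # P(T1 = t1)
# P(T2 = t2 | T1 = t1): each row conditions on the upstream link's travel time.
p_link2_given_1 = np.array([
    [0.50, 0.30, 0.10, 0.07, 0.03],
    [0.30, 0.40, 0.20, 0.07, 0.03],
    [0.10, 0.30, 0.40, 0.15, 0.05],
    [0.05, 0.15, 0.40, 0.30, 0.10],
    [0.03, 0.07, 0.20, 0.40, 0.30],
])

# Route travel time T = T1 + T2: accumulate P(T1=t1) * P(T2=t2 | T1=t1) over all pairs.
route = np.zeros(2 * len(t) - 1)
for i, t1 in enumerate(t):
    for j, t2 in enumerate(t):
        route[(t1 + t2) - 2 * t[0]] += p_link1[i] * p_link2_given_1[i, j]
route_times = np.arange(2 * t[0], 2 * t[-1] + 1)      # route travel-time support: 2..10 minutes
```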
Aggregate and individual replication probability within an explicit model of the research process.
Miller, Jeff; Schwarz, Wolf
2011-09-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
Simulation of Stochastic Processes by Coupled ODE-PDE
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A document discusses the emergence of randomness in solutions of coupled, fully deterministic ODE-PDE (ordinary differential equations-partial differential equations) due to failure of the Lipschitz condition, as a new phenomenon. It is possible to exploit the special properties of ordinary differential equations (represented by an arbitrarily chosen dynamical system) coupled with the corresponding Liouville equations (used to describe the evolution of initial uncertainties in terms of a joint probability distribution) in order to simulate stochastic processes with prescribed probability distributions. The important advantage of the proposed approach is that the simulation does not require a random-number generator.
Most recent common ancestor probability distributions in gene genealogies under selection.
Slade, P F
2000-12-01
A computational study is made of the conditional probability distribution for the allelic type of the most recent common ancestor in genealogies of samples of n genes drawn from a population under selection, given the initial sample configuration. Comparisons with the corresponding unconditional cases are presented. Such unconditional distributions differ from samples drawn from the unique stationary distribution of population allelic frequencies, known as Wright's formula, and are quantified. Biallelic haploid and diploid models are considered. A simplified structure for the ancestral selection graph of S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237) is enhanced further, reducing the effective branching rate in the graph. This improves efficiency of such a nonneutral analogue of the coalescent for use with computational likelihood-inference techniques.
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
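A Monte Carlo sketch of the two-point-measurement work distribution for a two-level system suddenly quenched to a random GUE Hamiltonian, assuming a fixed diagonal initial Hamiltonian and a thermal initial state; this is an illustrative numerical check, not the paper's analytical result:

```python
import numpy as np

rng = np.random.default_rng(7)
beta = 1.0                                             # inverse temperature of the initial state
H_i = np.diag([-0.5, 0.5])                             # fixed initial Hamiltonian (assumed units)
e_i = np.diag(H_i)
p_i = np.exp(-beta * e_i) / np.exp(-beta * e_i).sum()  # thermal populations of H_i eigenstates

works, weights = [], []
for _ in range(20000):
    # 2x2 GUE draw: Hermitian matrix built from independent Gaussian entries.
    a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    H_f = (a + a.conj().T) / 2.0
    e_f, v_f = np.linalg.eigh(H_f)
    overlaps = np.abs(v_f.conj().T) ** 2               # |<m_f|n_i>|^2 (H_i eigenbasis is the standard basis)
    for n in range(2):
        for m in range(2):
            works.append(e_f[m] - e_i[n])               # W = E_m^f - E_n^i
            weights.append(p_i[n] * overlaps[m, n])

hist, edges = np.histogram(works, bins=100, weights=weights, density=True)
```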
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuevas, F.A.; Curilef, S., E-mail: scurilef@ucn.cl; Plastino, A.R., E-mail: arplastino@ugr.es
The spread of a wave-packet (or its deformation) is a very important topic in quantum mechanics. Understanding this phenomenon is relevant in connection with the study of diverse physical systems. In this paper we apply various 'spreading measures' to characterize the evolution of an initially localized wave-packet in a tight-binding lattice, with special emphasis on information-theoretical measures. We investigate the behavior of both the probability distribution associated with the wave packet and the concomitant probability current. Complexity measures based upon Renyi entropies appear to be particularly good descriptors of the details of the delocalization process. - Highlights: > Spread of highly localized wave-packet in the tight-binding lattice. > Entropic and information-theoretical characterization is used to understand the delocalization. > The behavior of both the probability distribution and the concomitant probability current is investigated. > Renyi entropies appear to be good descriptors of the details of the delocalization process.
Tugnoli, Alessandro; Gubinelli, Gianfilippo; Landucci, Gabriele; Cozzani, Valerio
2014-08-30
The evaluation of the initial direction and velocity of the fragments generated in the fragmentation of a vessel due to internal pressure is important information in the assessment of damage caused by fragments, in particular within the quantitative risk assessment (QRA) of chemical and process plants. In the present study an approach is proposed for the identification and validation of probability density functions (pdfs) for the initial direction of the fragments. A detailed review of a large number of past accidents provided the background information for the validation procedure. A specific method was developed for the validation of the proposed pdfs. Validated pdfs were obtained for both the vertical and horizontal angles of projection and for the initial velocity of the fragments. Copyright © 2014 Elsevier B.V. All rights reserved.
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
Jeffrey H. Gove; Mark J. Ducey; William B. Leak; Lianjun Zhang
2008-01-01
Stand structures from a combined density manipulation and even- to uneven-aged conversion experiment on the Bartlett Experimental Forest (New Hampshire, USA) were examined 25 years after initial treatment for rotated sigmoidal diameter distributions. A comparison was made on these stands between two probability density functions for fitting these residual structures:...
May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M
2018-03-13
Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
Probability, geometry, and dynamics in the toss of a thick coin
NASA Astrophysics Data System (ADS)
Yong, Ee Hou; Mahadevan, L.
2011-12-01
When a thick cylindrical coin is tossed in the air and lands without bouncing on an inelastic substrate, it ends up on its face or its side. We account for the rigid body dynamics of spin and precession and calculate the probability distribution of heads, tails, and sides for a thick coin as a function of its dimensions and the distribution of its initial conditions. Our theory yields a simple expression for the aspect ratio of homogeneous coins with a prescribed frequency of heads or tails compared to sides, which we validate using data from the results of tossing coins of different aspect ratios.
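The sketch below is not the rigid-body spin-and-precession calculation of the paper; it is the much simpler static estimate in which the direction of gravity in the coin's body frame is taken to be uniform on the sphere and the coin settles onto whichever surface the gravity ray through its center exits. Under that assumption P(side) = h/sqrt(h² + D²), which the simulation reproduces.

```python
# Deliberately simplified, static estimate of the side probability for a thick
# coin: uniform orientation of gravity in the body frame, coin rests on the
# surface through which the gravity ray from its center exits.  This ignores
# the bounce-free spin/precession dynamics treated in the paper and serves
# only as a geometric baseline.
import numpy as np

rng = np.random.default_rng(0)

def side_probability(diameter, thickness, n_tosses=1_000_000):
    # Uniform direction on the sphere -> cos(theta) uniform on [-1, 1].
    cos_theta = rng.uniform(-1.0, 1.0, size=n_tosses)
    # The gravity ray exits through a face when tan(theta) <= D/h,
    # i.e. when |cos(theta)| >= h / sqrt(h^2 + D^2).
    face_cut = thickness / np.hypot(thickness, diameter)
    return np.mean(np.abs(cos_theta) < face_cut)

for h_over_d in (0.1, 1 / (2 * np.sqrt(2)), 0.5):
    p = side_probability(diameter=1.0, thickness=h_over_d)
    print(f"h/D = {h_over_d:.3f}: P(side) ≈ {p:.3f}  (analytic {h_over_d / np.hypot(1, h_over_d):.3f})")
```

In this static model the aspect ratio h/D = 1/(2√2) gives roughly one third for the side, a baseline against which the dynamical treatment in the paper can be compared.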
Comparison of probability statistics for automated ship detection in SAR imagery
NASA Astrophysics Data System (ADS)
Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.
1998-12-01
This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a Constant False Alarm Rate filter applied to Synthetic Aperture Radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look k-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.
Wang, Xueying; Walton, Jay R; Parshad, Rana D
2016-01-01
The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability for wild-type females going extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.
Estimating rate constants from single ion channel currents when the initial distribution is known.
The, Yu-Kai; Fernandez, Jacqueline; Popa, M Oana; Lerche, Holger; Timmer, Jens
2005-06-01
Single ion channel currents can be analysed by hidden or aggregated Markov models. A classical result from Fredkin et al. (Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer, vol I, pp 269-289, 1985) states that the maximum number of identifiable parameters is bounded by 2n(o)n(c), where n(o) and n(c) denote the number of open and closed states, respectively. We show that this bound can be overcome when the probabilities of the initial distribution are known and the data consist of several sweeps.
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
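A minimal single-pass sketch of the procedure described above: color white Gaussian noise in the frequency domain, then apply the Gaussian-CDF/inverse-CDF memoryless transform. The Lorentzian target spectrum and the exponential target density are arbitrary illustrative choices, not examples from the paper.

```python
# Shape the spectrum of white Gaussian noise with an FFT filter, then push the
# re-standardized colored Gaussian samples through the Gaussian CDF and the
# inverse CDF of the target distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2**16
white = rng.standard_normal(n)

# Desired (one-sided) power spectral density; a Lorentzian is used as an example.
freqs = np.fft.rfftfreq(n, d=1.0)
psd = 1.0 / (1.0 + (freqs / 0.01) ** 2)

# Color the noise: multiply the spectrum by sqrt(PSD) and transform back.
colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(psd), n)
colored /= colored.std()                      # re-standardize to unit variance

# Memoryless transform: Gaussian CDF, then inverse CDF of the target
# distribution (an exponential with unit mean in this example).
uniform = stats.norm.cdf(colored)
samples = stats.expon.ppf(uniform)

print("target mean 1.0, simulated mean", samples.mean())
```

As the abstract notes, the nonlinear transform only approximately preserves the target spectrum, so in practice the simulated spectrum should be checked against the target over the parameter range of interest.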
Migration of Dust Particles and Their Collisions with the Terrestrial Planets
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. C.
2004-01-01
Our review of previously published papers on dust migration can be found in [1], where we also present different distributions of migrating dust particles. We considered a different set of initial orbits for the dust particles than those in the previous papers. Below we focus mainly on the probabilities of collisions of migrating dust particles with the planets, based on a set of orbital elements during their evolution. Such probabilities were not calculated earlier.
Volume-weighted measure for eternal inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winitzki, Sergei
2008-08-15
I propose a new volume-weighted probability measure for cosmological 'multiverse' scenarios involving eternal inflation. The 'reheating-volume (RV) cutoff' calculates the distribution of observable quantities on a portion of the reheating hypersurface that is conditioned to be finite. The RV measure is gauge-invariant, does not suffer from the 'youngness paradox', and is independent of initial conditions at the beginning of inflation. In slow-roll inflationary models with a scalar inflaton, the RV-regulated probability distributions can be obtained by solving nonlinear diffusion equations. I discuss possible applications of the new measure to 'landscape' scenarios with bubble nucleation. As an illustration, I compute the predictions of the RV measure in a simple toy landscape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, Marko; Lee, Eunghyun
2014-01-15
We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find the probability for the left-most particle's position we find a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.
The role of probabilities in physics.
Le Bellac, Michel
2012-09-01
Although modern physics was born in the XVIIth century as a fully deterministic theory in the form of Newtonian mechanics, the use of probabilistic arguments turned out later on to be unavoidable. Three main situations can be distinguished. (1) When the number of degrees of freedom is very large, on the order of Avogadro's number, a detailed dynamical description is not possible, and in fact not useful: we do not care about the velocity of a particular molecule in a gas, all we need is the probability distribution of the velocities. This statistical description introduced by Maxwell and Boltzmann allows us to recover equilibrium thermodynamics, gives a microscopic interpretation of entropy and underlies our understanding of irreversibility. (2) Even when the number of degrees of freedom is small (but larger than three) sensitivity to initial conditions of chaotic dynamics makes determinism irrelevant in practice, because we cannot control the initial conditions with infinite accuracy. Although die tossing is in principle predictable, the approach to chaotic dynamics in some limit implies that our ignorance of initial conditions is translated into a probabilistic description: each face comes up with probability 1/6. (3) As is well-known, quantum mechanics is incompatible with determinism. However, quantum probabilities differ in an essential way from the probabilities introduced previously: it has been shown from the work of John Bell that quantum probabilities are intrinsic and cannot be given an ignorance interpretation based on a hypothetical deeper level of description. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu
In the landscape perspective, our Universe begins with a quantum tunneling from an eternally-inflating parent vacuum, followed by a period of slow-roll inflation. We investigate the tunneling process and calculate the probability distribution for the initial conditions and for the number of e-folds of slow-roll inflation, modeling the landscape by a small-field one-dimensional random Gaussian potential. We find that such a landscape is fully consistent with observations, but the probability for future detection of spatial curvature is rather low, P ∼ 10⁻³.
Starting and Stopping Spontaneous Family Conflicts.
ERIC Educational Resources Information Center
Vuchinich, Samuel
1987-01-01
Examined how 52 nondistressed families managed spontaneous verbal conflicts during family dinners. Found conflict initiation to be evenly distributed across family roles. Extension of conflict was constrained by constant probability of a next conflict move occurring. Most conflicts ended with no resolution. Mothers were most active in closing…
We'll Meet Again: Revealing Distributional and Temporal Patterns of Social Contact
Pachur, Thorsten; Schooler, Lael J.; Stevens, Jeffrey R.
2014-01-01
What are the dynamics and regularities underlying social contact, and how can contact with the people in one's social network be predicted? In order to characterize distributional and temporal patterns underlying contact probability, we asked 40 participants to keep a diary of their social contacts for 100 consecutive days. Using a memory framework previously used to study environmental regularities, we predicted that the probability of future contact would follow in systematic ways from the frequency, recency, and spacing of previous contact. The distribution of contact probability across the members of a person's social network was highly skewed, following an exponential function. As predicted, it emerged that future contact scaled linearly with frequency of past contact, proportionally to a power function with recency of past contact, and differentially according to the spacing of past contact. These relations emerged across different contact media and irrespective of whether the participant initiated or received contact. We discuss how the identification of these regularities might inspire more realistic analyses of behavior in social networks (e.g., attitude formation, cooperation). PMID:24475073
Some New Approaches to Multivariate Probability Distributions.
1986-12-01
Krishnaiah (1977). The following example may serve as an illustration of this point. EXAMPLE 2. (Fréchet's bivariate continuous distribution...the error in the theorem of Prakasa Rao (1974) and to Dr. P.R. Krishnaiah for his valuable comments on the initial draft, his monumental patience and...M. and Proschan, F. (1984). Nonparametric Concepts and Methods in Reliability, Handbook of Statistics, 4, 613-655, (eds. P.R. Krishnaiah and P.K
NASA Astrophysics Data System (ADS)
Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.
2017-12-01
Predicting the behaviour of the joint effect of sea level and wind waves is of great significance due to the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations accompanied by the waves will be even more harmful in the future. The main challenge when evaluating the effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables, to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the different shapes of the wave height distribution affect the distribution of the sum, and which of the components dominates under different wave conditions. As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, which provides a flexible tool for evaluating different risk levels in the current and future climate.
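If the two variables are treated as independent, as in the initial approximation described above, the density of their sum is simply the convolution of the two densities. The sketch below uses synthetic stand-in densities (a Rayleigh-shaped run-up and a Gaussian sea level), not the Helsinki data.

```python
# Density of the sum R + S of two independent variables via discrete convolution
# of their densities on a common grid.  The input densities are illustrative.
import numpy as np

dx = 0.01                                   # grid spacing, metres
x = np.arange(0.0, 5.0, dx)

# Illustrative wave run-up density (Rayleigh-shaped) and sea level density (Gaussian).
runup_pdf = (x / 0.3**2) * np.exp(-x**2 / (2 * 0.3**2))
sea = np.arange(-1.0, 2.0, dx)
sealevel_pdf = np.exp(-(sea - 0.2)**2 / (2 * 0.25**2)) / (0.25 * np.sqrt(2 * np.pi))

# Density of the sum: discrete convolution, scaled by the grid spacing.
sum_pdf = np.convolve(runup_pdf, sealevel_pdf) * dx
sum_x = np.arange(len(sum_pdf)) * dx + (x[0] + sea[0])

# Exceedance probability of the combined water level above a chosen threshold.
threshold = 1.5
print("P(R + S >", threshold, ") ≈", sum_pdf[sum_x > threshold].sum() * dx)
```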
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-capacity queueing system with server breakdowns is investigated, in which successive exponentially distributed failure-free times are followed by repair periods. After processing, a customer may either rejoin the queue (feedback) with probability q, or definitively leave the system with probability 1 - q. The system of integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is found using a linear-algebraic approach. The considered queueing system can be successfully used in modelling production lines with machine failures, in which the parameter q may be interpreted as the typical fraction of items demanding corrections. Moreover, this queueing model can be applied in the analysis of real TCP/IP performance, where q stands for the fraction of packets requiring retransmission.
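A simplified simulation sketch of a finite-buffer queue with Bernoulli feedback is given below; server breakdowns and the transient, Laplace-transform analysis of the paper are omitted, and the arrival rate, service rate, feedback probability, and capacity are illustrative values only.

```python
# Finite-buffer M/M/1-type queue with Bernoulli feedback: a served customer
# rejoins the queue with probability q and leaves with probability 1 - q.
# Estimates the long-run queue-size distribution by time-averaging.
import random

random.seed(6)
LAM, MU, Q_FEEDBACK, CAPACITY = 0.6, 1.0, 0.2, 10   # illustrative parameters
T_END = 200_000.0

t, n = 0.0, 0                       # current time, customers in system
time_in_state = [0.0] * (CAPACITY + 1)

while t < T_END:
    rate = LAM + (MU if n > 0 else 0.0)
    dt = random.expovariate(rate)
    time_in_state[n] += dt
    t += dt
    if random.random() < LAM / rate:            # arrival
        if n < CAPACITY:
            n += 1                              # otherwise the customer is lost
    else:                                       # service completion
        if random.random() >= Q_FEEDBACK:       # leaves for good with prob 1 - q
            n -= 1                              # with prob q the customer rejoins

probs = [x / t for x in time_in_state]
print("long-run queue-size distribution:", [round(p, 3) for p in probs])
```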
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
Event-by-event fluctuations in the elliptic-flow coefficient v2 are studied in PbPb collisions at √sNN = 5.02 TeV using the CMS detector at the CERN LHC. Elliptic-flow probability distributions p(v2) for charged particles with transverse momentum 0.3 < pT < 3.0 GeV and pseudorapidity |η| < 1.0 are determined for different collision centrality classes. The moments of the p(v2) distributions are used to calculate the v2 coefficients based on cumulant orders 2, 4, 6, and 8. A rank ordering of the higher-order cumulant results and nonzero standardized skewness values obtained for the p(v2) distributions indicate non-Gaussian initial-state fluctuation behavior. Bessel-Gaussian and elliptic power fits to the flow distributions are studied to characterize the initial-state spatial anisotropy.
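The step from the moments of p(v2) to cumulant-based coefficients can be sketched as follows, using the standard cumulant relations; the Bessel-Gaussian toy distribution and its parameters are illustrative and are not CMS data.

```python
# Cumulant flow coefficients from the moments of a per-event p(v2) distribution.
# p(v2) is sampled here from a Bessel-Gaussian (magnitude of a 2D Gaussian
# displaced by v0), for which v2{4}, v2{6}, v2{8} should all converge to v0.
import numpy as np

rng = np.random.default_rng(2)
v0, sigma, n_events = 0.10, 0.03, 2_000_000
vx = rng.normal(v0, sigma, n_events)
vy = rng.normal(0.0, sigma, n_events)
v2 = np.hypot(vx, vy)                       # per-event elliptic flow magnitude

m2, m4, m6, m8 = (np.mean(v2**k) for k in (2, 4, 6, 8))

# Standard cumulant expressions in terms of the moments of p(v2).
c2 = m2
c4 = m4 - 2 * m2**2
c6 = m6 - 9 * m2 * m4 + 12 * m2**3
c8 = m8 - 16 * m6 * m2 - 18 * m4**2 + 144 * m4 * m2**2 - 144 * m2**4

print("v2{2} =", c2**0.5)
print("v2{4} =", (-c4)**0.25)
print("v2{6} =", (c6 / 4)**(1 / 6))
print("v2{8} =", (-c8 / 33)**0.125)
```

For a Bessel-Gaussian p(v2) the higher-order estimates v2{4}, v2{6}, and v2{8} coincide, so a rank ordering (splitting) among them, as reported above, signals non-Gaussian fluctuations.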
Modeling pore corrosion in normally open gold- plated copper connectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien
2008-09-01
The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H₂S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
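A heavily simplified toy realization of the two stochastic ingredients named above (initiation proportional to surface area, growth extinction proportional to bloom volume) might look as follows; all rates, the growth law, and the constants are invented for illustration and do not reproduce the calibrated model of the report.

```python
# Toy corrosion-bloom model: new blooms initiate as a Poisson process with rate
# proportional to the exposed area, and each active bloom stops growing with a
# per-step probability proportional to its current volume.
import numpy as np

rng = np.random.default_rng(7)
AREA = 1.0                      # exposed surface area (arbitrary units)
INIT_RATE = 5.0                 # bloom initiations per unit area per unit time (assumed)
GROWTH = 0.02                   # volume added per active bloom per unit time (assumed)
EXTINCT_COEFF = 0.8             # extinction probability per unit time per unit volume (assumed)
DT, N_STEPS = 0.1, 500

volumes, active = [], []
for _ in range(N_STEPS):
    # New blooms: Poisson with mean proportional to surface area.
    for _ in range(rng.poisson(INIT_RATE * AREA * DT)):
        volumes.append(0.0)
        active.append(True)
    # Growth and volume-dependent extinction of active blooms.
    for i, is_active in enumerate(active):
        if is_active:
            volumes[i] += GROWTH * DT
            if rng.random() < min(1.0, EXTINCT_COEFF * volumes[i] * DT):
                active[i] = False

print("number of blooms:", len(volumes))
print("mean bloom volume:", np.mean(volumes))
print("still growing:", sum(active))
```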
Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph
2014-07-01
distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this...since any multiple of an eigenvector remains an eigenvector. Without any loss, let bk = 1. Now we can ascertain the explicit solution for bj when k < j...this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more
The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2014-01-01
A micromechanical method is employed for the prediction of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case using the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.
A Statistical Study of the Mass Distribution of Neutron Stars
NASA Astrophysics Data System (ADS)
Cheng, Zheng; Zhang, Cheng-Min; Zhao, Yong-Heng; Wang, De-Hua; Pan, Yuan-Yue; Lei, Ya-Juan
2014-07-01
By reviewing the methods of mass measurements of neutron stars in four different kinds of systems, i.e., the high-mass X-ray binaries (HMXBs), low-mass X-ray binaries (LMXBs), double neutron star systems (DNSs) and neutron star-white dwarf (NS-WD) binary systems, we have collected the orbital parameters of 40 systems. By using the bootstrap method and the Monte-Carlo method, we have rebuilt the likelihood probability curves of the measured masses of 46 neutron stars. The statistical analysis of the simulation results shows that the masses of neutron stars in the X-ray neutron star systems and those in the radio pulsar systems exhibit different distributions. Besides, the Bayesian statistics of these four kinds of systems yields most-probable mass distributions of (1.340 ± 0.230) M☉, (1.505 ± 0.125) M☉, (1.335 ± 0.055) M☉ and (1.495 ± 0.225) M☉, respectively. It is noteworthy that the masses of neutron stars in the HMXB and DNS systems are smaller than those in the other two kinds of systems by approximately 0.16 M☉. This result is consistent with the theoretical model in which the pulsar is spun up to millisecond periods via accretion of approximately 0.2 M☉. If the HMXBs and LMXBs are respectively taken to be the precursors of the DNS and NS-WD systems, then the influence of the accretion effect on the masses of neutron stars in the HMXB systems should be exceedingly small. Their mass distributions should be very close to the initial one during the formation of neutron stars. As for the LMXB and NS-WD systems, they should have already undergone sufficient accretion, hence a rather large deviation from the initial mass distribution arises.
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac
1993-01-01
Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.
Hydrodynamic Flow Fluctuations in √sNN = 5.02 TeV PbPb Collisions
NASA Astrophysics Data System (ADS)
Castle, James R.
The collective, anisotropic expansion of the medium created in ultrarelativistic heavy-ion collisions, known as flow, is characterized through a Fourier expansion of the final-state azimuthal particle density. In the Fourier expansion, flow harmonic coefficients vn correspond to shape components in the final-state particle density, which are a consequence of similar spatial anisotropies in the initial-state transverse energy density of a collision. Flow harmonic fluctuations are studied for PbPb collisions at √sNN = 5.02 TeV using the CMS detector at the CERN LHC. Flow harmonic probability distributions p(vn) are obtained using particles with 0.3 < pT < 3.0 GeV/c and |η| < 1.0 by removing finite-multiplicity resolution effects from the observed azimuthal particle density through an unfolding procedure. Cumulant elliptic flow harmonics (n = 2) are determined from the moments of the unfolded p(v2) distributions and used to construct observables in 5% wide centrality bins up to 60% that relate to the initial-state spatial anisotropy. Hydrodynamic models predict that fluctuations in the initial-state transverse energy density will lead to a non-Gaussian component in the elliptic flow probability distributions that manifests as a negative skewness. A statistically significant negative skewness is observed for all centrality bins, as evidenced by a splitting between the higher-order cumulant elliptic flow harmonics. The unfolded p(v2) distributions are transformed assuming a linear relationship between the initial-state spatial anisotropy and final-state flow and are fitted with elliptic power law and Bessel-Gaussian parametrizations to infer information on the nature of initial-state fluctuations. The elliptic power law parametrization is found to provide a more accurate description of the fluctuations than the Bessel-Gaussian parametrization. In addition, the event-shape engineering technique, where events are further divided into classes based on an observed ellipticity, is used to study fluctuation-driven differences in the initial-state spatial anisotropy for a given collision centrality that would otherwise be destroyed by event-averaging techniques. Correlations between the first and second moments of p(vn) distributions and event ellipticity are measured for harmonic orders n = 2-4 by coupling event-shape engineering to the unfolding technique.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn; Ide, Yusuke
In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, deviation and mixing time of symmetric quantum walks are also investigated.
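For concreteness, a toy discrete-time quantum walk on an N-site cycle with a one-parameter coin and either the swapping or the usual moving shift can be iterated numerically as below; this only estimates the time-averaged position distribution and does not reproduce the paper's closed-form limiting result.

```python
# Discrete-time quantum walk on an N-site cycle: coin + shift, iterated, with
# the time-averaged position distribution accumulated along the way.
import numpy as np

N, STEPS = 11, 400
theta = np.pi / 4                            # coin parameter (Hadamard-like at pi/4)
coin = np.array([[np.cos(theta),  np.sin(theta)],
                 [np.sin(theta), -np.cos(theta)]])

def step(psi, swapping=True):
    """One walk step; psi has shape (N, 2): amplitudes for coin states 0 and 1."""
    psi = psi @ coin.T                       # apply the coin at every site
    shifted = np.empty_like(psi)
    if swapping:
        # Swapping shift: |x, 0> -> |x+1, 1>,  |x, 1> -> |x-1, 0>
        shifted[:, 1] = np.roll(psi[:, 0], 1)
        shifted[:, 0] = np.roll(psi[:, 1], -1)
    else:
        # Moving shift:   |x, 0> -> |x+1, 0>,  |x, 1> -> |x-1, 1>
        shifted[:, 0] = np.roll(psi[:, 0], 1)
        shifted[:, 1] = np.roll(psi[:, 1], -1)
    return shifted

# Walker starts at site 0 with a symmetric coin state.
psi = np.zeros((N, 2), dtype=complex)
psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]

time_avg = np.zeros(N)
for _ in range(STEPS):
    psi = step(psi)
    time_avg += (np.abs(psi) ** 2).sum(axis=1) / STEPS   # position distribution

print("time-averaged position distribution:", np.round(time_avg, 3))
```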
Second look at the spread of epidemics on networks
NASA Astrophysics Data System (ADS)
Kenah, Eben; Robins, James M.
2007-09-01
In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
NASA Technical Reports Server (NTRS)
Baily, N. A.; Steigerwalt, J. E.; Hilbert, J. W.
1972-01-01
The frequency distributions of event size in the deposition of energy over small pathlengths have been measured after penetration of 44.3 MeV protons through various thicknesses of tissue-equivalent material. Results show that particle energy straggling of an initially monoenergetic proton beam after passage through an absorber causes the frequency distributions of energy deposited in short pathlengths of low atomic number materials to remain broad. In all cases investigated, the ratio of the most probable to the average energy losses has been significantly less than unity.
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
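An independent Python re-implementation sketch of the idea (NEWTPOIS itself is written in C): solve P(X <= n; lambda) = p for lambda by Newton's method, using the fact that the derivative of the cumulative Poisson with respect to lambda is -exp(-lambda) lambda^n / n!. The bisection safeguard added here is this sketch's own choice and may differ from the program's actual iteration.

```python
import math

def poisson_cdf(lam, n):
    """P(X <= n) for X ~ Poisson(lam), summed term by term."""
    term = math.exp(-lam)
    total = term
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def newtpois(n, p, eps=1e-10, max_iter=200):
    """Solve P(X <= n; lam) = p for lam: Newton steps with a bisection safeguard."""
    lo, hi = 0.0, 1.0
    while poisson_cdf(hi, n) > p:          # expand the bracket; the CDF decreases with lam
        hi *= 2.0
    lam = 0.5 * (lo + hi)
    for _ in range(max_iter):
        f = poisson_cdf(lam, n) - p
        if abs(f) < eps:
            break
        if f > 0:                          # CDF still too large -> the root lies above lam
            lo = lam
        else:
            hi = lam
        # d/dlam P(X <= n; lam) = -exp(-lam) * lam**n / n!
        dfdlam = -math.exp(-lam) * lam**n / math.factorial(n)
        if dfdlam != 0.0:
            lam = lam - f / dfdlam
        if not (lo < lam < hi):            # Newton step left the bracket: bisect instead
            lam = 0.5 * (lo + hi)
    return lam

lam = newtpois(n=3, p=0.5)
print("lambda =", lam, " check:", poisson_cdf(lam, 3))
```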
Gravity and count probabilities in an expanding universe
NASA Technical Reports Server (NTRS)
Bouchet, Francois R.; Hernquist, Lars
1992-01-01
The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
Michael, Andrew J.
2012-01-01
Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
NASA Astrophysics Data System (ADS)
Carruba, V.; Roig, F.; Michtchenko, T. A.; Ferraz-Mello, S.; Nesvorný, D.
2007-04-01
Context: Nearly all members of the Vesta family cross the orbits of (4) Vesta, one of the most massive asteroids in the main belt, and some of them approach it closely. When mutual velocities during such close encounters are low, the trajectory of the small body can be gravitationally deflected, consequently changing its heliocentric orbital elements. While the effect of a single close encounter may be small, repeated close encounters may significantly change the proper element distribution of members of asteroid families. Aims: We develop a model of the long-term effect of close encounters with massive asteroids, so as to be able to predict how far former members of the Vesta family could have drifted away from the family. Methods: We first developed a new symplectic integrator that simulates both the effects of close encounters and the Yarkovsky effect. We analyzed the results of a simulation involving a fictitious Vesta family, and propagated the asteroid proper element distribution using the probability density function (pdf hereafter), i.e. the function that describes the probability of having an encounter that modifies a proper element x by Δx, for all the possible values of Δx. Given any asteroids' proper element distribution at time t, the distribution at time t+T may be predicted if the pdf is known (Bachelier 1900, Théorie de la spéculation; Hughes 1995, Random Walks and Random Environments, Vol. I). Results: We applied our new method to the problem of V-type asteroids outside the Vesta family (i.e., the 31 currently known asteroids in the inner asteroid belt that have the same spectral type of members as the Vesta family, but that are outside the limits of the dynamical family) and determined that at least ten objects have a significant diffusion probability over the minimum estimated age of the Vesta family of 1.2 Gyr (Carruba et al. 2005, A&A, 441, 819). These objects can therefore be explained in the framework of diffusion via repeated close encounters with (4) Vesta of asteroids originally closer to the parent body. Conclusions: We computed diffusion probabilities at the location of four of these asteroids for various initial conditions, parametrized by values of initial ejection velocity V_ej. Based on our results, we believe the Vesta family age is (1200 ± 700) Myr old, with an initial ejection velocity of (240 ± 60) m/s. Appendices are only available in electronic form at http://www.aanda.org
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ~ω3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Irreversible reactions and diffusive escape: Stationary properties
Krapivsky, Paul L.; Ben-Naim, Eli
2015-05-01
We study three basic diffusion-controlled reaction processes—annihilation, coalescence, and aggregation. We examine the evolution starting with the most natural inhomogeneous initial configuration where a half-line is uniformly filled by particles, while the complementary half-line is empty. We show that the total number of particles that infiltrate the initially empty half-line is finite and has a stationary distribution. We determine the evolution of the average density from which we derive the average total number N of particles in the initially empty half-line; e.g. for annihilation ⟨N⟩ = 3/16 + 1/(4π). For the coalescence process, we devise a procedure that in principle allows one to compute P(N), the probability to find exactly N particles in the initially empty half-line; we complete the calculations in the first non-trivial case (N = 1). As a by-product we derive the distance distribution between the two leading particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu
2012-03-15
We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.
The energetic ion signature of an O-type neutral line in the geomagnetic tail
NASA Technical Reports Server (NTRS)
Martin, R. F., Jr.; Johnson, D. F.; Speiser, T. W.
1991-01-01
An energetic ion signature is presented which has the potential for remote sensing of an O-type neutral line embedded in a current sheet. A source plasma with a tailward flowing Kappa distribution yields a strongly non-Kappa distribution after interacting with the neutral line: sharp jumps, or ridges, occur in the velocity space distribution function f(v⊥, v∥) associated with both increases and decreases in f. The jumps occur when orbits are reversed in the x-direction: a reversal causing initially earthward particles (low probability in the source distribution) to be observed results in a decrease in f, while a reversal causing initially tailward particles to be observed produces an increase in f. The reversals, and hence the jumps, occur at approximately constant values of perpendicular velocity in both the positive-v∥ and negative-v∥ half planes. The results were obtained using single particle simulations in a fixed magnetic field model.
No-signaling quantum key distribution: solution by linear programming
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
Nuclear Ensemble Approach with Importance Sampling.
Kossoski, Fábris; Barbatti, Mario
2018-06-12
We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
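A minimal sketch of the reweighting step described above: draw initial conditions once from a sampling density p(x), then estimate an observable under a different target density q(x) with weights q(x)/p(x). The classical 1D harmonic oscillator at two temperatures is used only because its ⟨x²⟩ can be checked analytically; it is not the phenol application of the paper.

```python
# Importance-sampling reweighting: samples drawn at one temperature are reused
# to estimate an observable at another temperature via normalized weights.
import numpy as np

rng = np.random.default_rng(3)
k = 1.0                                     # force constant (arbitrary units)
kB = 1.0
T_sample, T_target = 400.0, 300.0

def boltzmann_pdf(x, T):
    var = kB * T / k
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Sample once, at T_sample.
x = rng.normal(0.0, np.sqrt(kB * T_sample / k), size=200_000)

# Reweight to T_target: normalized importance weights q(x)/p(x).
w = boltzmann_pdf(x, T_target) / boltzmann_pdf(x, T_sample)
w /= w.sum()

print("reweighted <x^2>:", np.sum(w * x**2), " exact:", kB * T_target / k)
```

Sampling at the wider (higher-temperature) distribution and reweighting down keeps the weights well behaved, which is consistent with the abstract's observation that sampling at intermediate temperatures improves convergence.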
Conservative Belief and Rationality
2012-10-03
Brandenburger and Dekel have shown that common belief of rationality (CBR)...themselves in a position to which they initially assigned probability 0. Tan and Werlang [1988] and Brandenburger and Dekel [1987] show that common
Application of Bayesian Reliability Concepts to Cruise Missile Electronic Components
1989-09-01
and contrast them with the more prevalent classical inference view. This literature review will consider current...events on the basis of whatever evidence is currently available. Then, if additional evidence is subsequently obtained, the initial probabilities are...Chay contends there is no longer any need to approximate continuous prior distributions through discretization because current computer calculations
Probability and surprisal in auditory comprehension of morphologically complex words.
Balling, Laura Winther; Baayen, R Harald
2012-10-01
Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial as well as complex uniqueness points predict longer response latencies. We argue that the effects of these uniqueness points arise due to the large surprisal (Levy, 2008) carried by the phonemes at these uniqueness points, and provide independent evidence that how cumulative surprisal builds up in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory comprehension requires complementation from information theory in order to do justice to the cognitive cost of updating probability distributions over lexical candidates. Copyright © 2012 Elsevier B.V. All rights reserved.
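The surprisal bookkeeping described above can be illustrated with a toy cohort calculation: at each phoneme, the surprisal is -log2 of the fraction of cohort probability mass that survives that phoneme. The mini-lexicon and its frequencies below are entirely hypothetical, and dropping words once the input outruns their length is a simplification of how embedded words are handled.

```python
import math

# Hypothetical mini-lexicon: space-separated pseudo-phoneme strings with
# made-up frequencies (none of these values come from the experiments above).
lexicon = {
    "b l e s": 40,
    "b l e s i ng": 12,
    "b l e n d": 25,
    "b l a m": 30,
    "g r e t": 100,
}
target = "b l e s i ng".split()

cohort = {tuple(word.split()): freq for word, freq in lexicon.items()}
prev_mass = sum(cohort.values())
for i, phoneme in enumerate(target, start=1):
    # Keep only candidates still compatible with the input heard so far.
    cohort = {w: f for w, f in cohort.items() if len(w) >= i and w[i - 1] == phoneme}
    mass = sum(cohort.values())
    surprisal = -math.log2(mass / prev_mass)
    print(f"phoneme {i} ({phoneme}): cohort {len(cohort)} words, surprisal {surprisal:.2f} bits")
    prev_mass = mass
```

In this toy example the largest surprisal falls at the phoneme where the last competitors drop out, which is the kind of uniqueness-point effect the abstract describes.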
Theory of rotational transition in atom-diatom chemical reaction
NASA Astrophysics Data System (ADS)
Nakamura, Masato; Nakamura, Hiroki
1989-05-01
Rotational transitions in atom-diatom chemical reactions are studied theoretically. A new approximate theory (which we call the IOS-DW approximation) is proposed on the basis of the physical idea that rotational transition in reaction is induced by the following two different mechanisms: rotationally inelastic half collision in both initial and final arrangement channels, and coordinate transformation in the reaction zone. This theory gives a fairly compact expression for the state-to-state transition probability. Introducing the additional physically reasonable assumption that reaction (particle rearrangement) takes place in a spatially localized region, we have reduced this expression into a simpler analytical form which can explicitly give the overall rotational state distribution in reaction. Numerical application was made to the H+H2 reaction and demonstrated its effectiveness given its simplicity. A further simplified, most naive approximation, i.e., the independent-events approximation, was also proposed and demonstrated to work well in the test calculation of H+H2. The overall rotational state distribution is expressed simply by a product sum of the transition probabilities for the three consecutive processes in reaction: inelastic transition in the initial half collision, transition due to particle rearrangement, and inelastic transition in the final half collision.
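In the independent-events picture summarized in the last sentence, the overall rotational distribution is just a chain of three transition-probability matrices. The sketch below uses invented 4-state toy matrices, not H+H2 results.

```python
# Independent-events approximation as a product of three row-stochastic
# transition matrices: entrance-channel half collision, rearrangement step,
# exit-channel half collision.  The matrices here are random toy numbers.
import numpy as np

def normalize_rows(m):
    return m / m.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
n_rot = 4                                     # toy number of rotational states

P_initial_half = normalize_rows(rng.random((n_rot, n_rot)))   # entrance-channel inelasticity
P_rearrange    = normalize_rows(rng.random((n_rot, n_rot)))   # transition due to rearrangement
P_final_half   = normalize_rows(rng.random((n_rot, n_rot)))   # exit-channel inelasticity

# Distribution over product rotational states for a reagent prepared in j = 0.
initial = np.zeros(n_rot)
initial[0] = 1.0
final = initial @ P_initial_half @ P_rearrange @ P_final_half
print("final rotational state distribution:", np.round(final, 3))
```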
How likely are constituent quanta to initiate inflation?
Berezhiani, Lasha; Trodden, Mark
2015-08-06
In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific – yet quite general and well grounded – assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.
NASA Astrophysics Data System (ADS)
Rambalakos, Andreas
Current federal aviation regulations in the United States and around the world mandate the need for aircraft structures to meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is ensured by a continuous inspection program. The multifold objective of this research is to develop a methodology, based on a direct Monte Carlo simulation process, for assessing the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the existing expression for the capacity CDF of a parallel system of three elements to a parallel system of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation), and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely, a parameter associated with the time to crack initiation, the applied nominal stress fluctuation, and the minimum acceptable reliability level.
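A minimal sketch of the direct Monte Carlo idea for the equal-load-sharing parallel system described above; the lognormal element-strength distribution and all parameter values are illustrative assumptions, not the thesis's data.

import numpy as np

rng = np.random.default_rng(0)

def system_capacity(strengths):
    """Capacity of an equal-load-sharing parallel system: after the k weakest
    elements fail, the remaining n-k elements share the load, so the capacity
    is max over k of (n-k) * s_(k+1), with s sorted in ascending order."""
    s = np.sort(strengths)
    n = s.size
    return np.max(s * np.arange(n, 0, -1))

n_elements, n_trials = 6, 100_000
applied_load = 4.5                               # illustrative total load
strengths = rng.lognormal(mean=0.0, sigma=0.2, size=(n_trials, n_elements))
capacity = np.array([system_capacity(row) for row in strengths])

p_failure = np.mean(capacity < applied_load)     # Monte Carlo failure probability
print(f"P(failure) ~ {p_failure:.4f}")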
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
NASA Astrophysics Data System (ADS)
Singh, Shailesh Kumar
2014-05-01
Streamflow forecasts are essential for making critical decisions on the optimal allocation of water supplies for various demands, including irrigation for agriculture, habitat for fisheries, hydropower production, and flood warning. The major objective of this study is to explore Ensemble Streamflow Prediction (ESP) based forecasts in New Zealand catchments and to highlight the present seasonal flow forecasting capability of the National Institute of Water and Atmospheric Research (NIWA). In this study a probabilistic forecast framework for ESP is presented. The basic assumption in ESP is that future weather patterns will resemble those experienced historically. Hence, past forcing data can be used with the current initial conditions to generate an ensemble of predictions. Small differences in initial conditions can result in large differences in the forecast. The initial state of the catchment is obtained by continuously running the model up to the current time; this initial state is then combined with past forcing data to generate an ensemble of future flows. The approach taken here is to run the TopNet hydrological model with a range of past forcing data (precipitation, temperature, etc.) from the current initial conditions. The collection of runs is called the ensemble. ESP gives probabilistic forecasts of flow: from the ensemble members, probability distributions can be derived, which capture part of the intrinsic uncertainty in weather or climate. An ensemble streamflow prediction that provides probabilistic hydrological forecasts with lead times of up to 3 months is presented for the Rangitata, Ahuriri, Hooker, and Jollie rivers in the South Island of New Zealand. ESP-based seasonal forecasts have better skill than climatology. This system can provide better overall information for holistic water resource management.
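The ESP recipe above (one current initial state combined with each historical forcing sequence to build an ensemble of future flows) can be illustrated with a toy storage model; the model and forcing values below are placeholders standing in for TopNet and the real climate record.

import numpy as np

def toy_runoff_model(initial_storage, precip_series, k=0.2):
    """Very simple linear-reservoir stand-in for a hydrological model."""
    storage, flows = initial_storage, []
    for p in precip_series:
        flow = k * storage
        storage = storage + p - flow
        flows.append(flow)
    return np.array(flows)

rng = np.random.default_rng(42)
historical_forcings = [rng.gamma(2.0, 5.0, size=90) for _ in range(30)]  # 30 past seasons (illustrative)
current_state = 120.0            # initial storage obtained by running the model up to today

# One ensemble member per historical season, all starting from the same current state
ensemble = np.array([toy_runoff_model(current_state, f) for f in historical_forcings])
seasonal_totals = ensemble.sum(axis=1)

# Probabilistic forecast: empirical quantiles of the ensemble
print(np.percentile(seasonal_totals, [10, 50, 90]))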
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Statistical physics of medical diagnostics: Study of a probabilistic model.
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Public opinion by a poll process: model study and Bayesian view
NASA Astrophysics Data System (ADS)
Lee, Hyun Keun; Kim, Yong Woon
2018-05-01
We study the formation of public opinion in a poll process where the current score is open to the public. The voters are assumed to vote probabilistically for or against their own preference considering the group opinion collected up to that point in the score. The poll-score probability is found to follow the beta distribution in the large-poll limit. We demonstrate that various poll results, even those contradictory to the population preference, are possible with non-zero probability density, and that such deviations are readily triggered by an initial bias. We also note that our poll model can be understood from a Bayesian viewpoint.
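A toy simulation of such a poll process, under the assumption (introduced here only for illustration) that each voter follows the running score with probability lam and otherwise votes by his or her own preference; the paper's exact voting rule may differ.

import numpy as np

rng = np.random.default_rng(1)

def run_poll(n_voters=1000, pref_up=0.5, lam=0.4, init_up=1, init_down=1):
    """Sequential poll with an open running score; initial bias enters through the seed counts."""
    up, down = init_up, init_down
    for _ in range(n_voters):
        follow_score = rng.random() < lam
        p_up = up / (up + down) if follow_score else pref_up
        if rng.random() < p_up:
            up += 1
        else:
            down += 1
    return up / (up + down)                  # final poll score

scores = np.array([run_poll() for _ in range(2000)])
# Broad, beta-like spread of final scores; some polls contradict the 50/50 population preference
print(scores.mean(), scores.std(), (scores > 0.6).mean())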
Unstable density distribution associated with equatorial plasma bubble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.
2016-04-15
In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both the density and the updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, in agreement with the recently reported distribution within EPBs from space-borne measurements, and offers favorable conditions for small-scale kinetic instabilities.
Green, M J; Browne, W J; Green, L E; Bradley, A J; Leach, K A; Breen, J E; Medley, G F
2009-10-01
The fundamental objective for health research is to determine whether changes should be made to clinical decisions. Decisions made by veterinary surgeons in the light of new research evidence are known to be influenced by their prior beliefs, especially their initial opinions about the plausibility of possible results. In this paper, clinical trial results for a bovine mastitis control plan were evaluated within a Bayesian context, to incorporate a community of prior distributions that represented a spectrum of clinical prior beliefs. The aim was to quantify the effect of veterinary surgeons' initial viewpoints on the interpretation of the trial results. A Bayesian analysis was conducted using Markov chain Monte Carlo procedures. Stochastic models included a financial cost attributed to a change in clinical mastitis following implementation of the control plan. Prior distributions were incorporated that covered a realistic range of possible clinical viewpoints, including scepticism, enthusiasm and uncertainty. Posterior distributions revealed important differences in the financial gain that clinicians with different starting viewpoints would anticipate from the mastitis control plan, given the actual research results. For example, a severe skeptic would ascribe a probability of 0.50 for a return of < 5 UK pounds per cow in an average herd that implemented the plan, whereas an enthusiast would ascribe this probability for a return of > 20 UK pounds per cow. Simulations using increased trial sizes indicated that if the original study was four times as large, an initial skeptic would be more convinced about the efficacy of the control plan but would still anticipate less financial return than an initial enthusiast would anticipate after the original study. In conclusion, it is possible to estimate how clinicians' prior beliefs influence their interpretation of research evidence. Further research on the extent to which different interpretations of evidence result in changes to clinical practice would be worthwhile.
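A minimal sketch of how a community of priors changes the conclusion drawn from one and the same trial result, using a conjugate normal-normal update for the mean financial return; all numbers are illustrative and are not the study's actual estimates.

from scipy.stats import norm

# Illustrative trial result for the mean financial return (UK pounds per cow)
trial_mean, trial_se = 15.0, 6.0

priors = {"sceptic": (0.0, 4.0), "uncertain": (10.0, 15.0), "enthusiast": (25.0, 6.0)}

for name, (m0, s0) in priors.items():
    # Conjugate normal-normal posterior for the mean return
    w = 1 / s0**2 + 1 / trial_se**2
    post_mean = (m0 / s0**2 + trial_mean / trial_se**2) / w
    post_sd = w ** -0.5
    p_low = norm.cdf(5.0, post_mean, post_sd)      # P(return < 5 pounds/cow)
    p_high = norm.sf(20.0, post_mean, post_sd)     # P(return > 20 pounds/cow)
    print(f"{name:10s}: posterior mean {post_mean:5.1f}, P(<5)={p_low:.2f}, P(>20)={p_high:.2f}")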
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependence among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. A total of 30 years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for a case study. Four kinds of distributions, namely the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
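A bivariate sketch of the copula construction described above: sample from a Clayton copula by conditional inversion and map the uniform margins through Pearson Type III marginals. The copula parameter and marginal parameters are invented for illustration; the study fits them to the Bohai Sea hindcast and also builds trivariate models.

import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(7)
theta = 2.0                                   # illustrative Clayton dependence parameter
n = 10_000

# Clayton copula sampling by conditional inversion
u = rng.uniform(size=n)
t = rng.uniform(size=n)
v = ((t ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

# Map the uniforms through (illustrative) Pearson Type III marginals
wave_height = pearson3.ppf(u, skew=0.8, loc=2.0, scale=0.8)   # m
wind_speed  = pearson3.ppf(v, skew=0.5, loc=12.0, scale=3.0)  # m/s

# Joint exceedance probability for a candidate design combination
print(np.mean((wave_height > 4.0) & (wind_speed > 18.0)))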
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional distributions are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Meng-Zheng; School of Physics and Electronic Information, Huaibei Normal University, Huaibei 235000; Ye, Liu, E-mail: yeliu@ahu.edu.cn
An efficient scheme is proposed to implement phase-covariant quantum cloning by using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and the occupation probability distribution in the Fock basis enable us to follow the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of the numerical simulation method, we find that the present cloning machine is not an isotropic model because its output fidelity depends on the polar and azimuthal angles of the initial input state on the Bloch sphere. The fidelity of the actual output clone in the present scheme is slightly smaller than that in the theoretical case. The simulation results are consistent with the theoretical ones. This further corroborates that our scheme based on circuit QED can efficiently implement the PCC transformation.
The geometry of proliferating dicot cells.
Korn, R W
2001-02-01
The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentae), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation, and coefficient of variation (CV = 100 × standard deviation/average) of cell size were determined; the CV of mother cells is less than that of daughter cells, and both are less than that of all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations, measured in arbitrary time units, were determined by reconstructing the initial and final sizes of cells, and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average difference of 11.6% in the sizes of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.
Simple techniques for improving deep neural network outcomes on commodity hardware
NASA Astrophysics Data System (ADS)
Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.
2017-08-01
We benchmark improvements in the performance of deep neural networks (DNNs) on the MNIST data set upon implementing two simple modifications to the algorithm that have little computational overhead. The first is GPU parallelization on a commodity graphics card, and the second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
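A minimal sketch of drawing a random orthogonal weight matrix via the QR decomposition of a Gaussian matrix, a standard construction; the paper does not specify its exact sampling routine, so the details below are an assumption.

import numpy as np

def random_orthogonal(n, rng):
    """Random orthogonal matrix from the QR decomposition of a Gaussian matrix.
    The sign correction makes the distribution uniform (Haar) over the orthogonal group."""
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
W = random_orthogonal(256, rng)                        # e.g. a 256x256 hidden-layer weight matrix
print(np.allclose(W @ W.T, np.eye(256), atol=1e-10))   # orthogonality check

# Spectrum check analogous to the analysis above: singular values of an orthogonal matrix equal 1
print(np.linalg.svd(W, compute_uv=False)[:5])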
Brown, K; Buchmann, A; Balmain, A
1990-01-01
A number of mouse skin tumors initiated by the carcinogens N-methyl-N'-nitro-N-nitrosoguanidine (MNNG), methylnitrosourea (MNU), 3-methylcholanthrene (MCA), and 7,12-dimethylbenz[a]anthracene (DMBA) have been shown to contain activated Ha-ras genes. In each case, the point mutations responsible for activation have been characterized. Results presented demonstrate the carcinogen-specific nature of these ras mutations. For each initiating agent, a distinct spectrum of mutations is observed. Most importantly, the distribution of ras gene mutations is found to differ between benign papillomas and carcinomas, suggesting that molecular events occurring at the time of initiation influence the probability with which papillomas progress to malignancy. This study provides molecular evidence in support of the existence of subsets of papillomas with differing progression frequencies. Thus, the alkylating agents MNNG and MNU induced exclusively G→A transitions at codon 12, with this mutation being found predominantly in papillomas. MCA initiation produced both codon 13 G→T and codon 61 A→T transversions in papillomas; only the G→T mutation, however, was found in carcinomas. These findings provide strong evidence that the mutational activation of Ha-ras occurs as a result of the initiation process and that the nature of the initiating event can affect the probability of progression to malignancy.
Aspects of Geodesical Motion with Fisher-Rao Metric: Classical and Quantum
NASA Astrophysics Data System (ADS)
Ciaglia, Florio M.; Cosmo, Fabio Di; Felice, Domenico; Mancini, Stefano; Marmo, Giuseppe; Pérez-Pardo, Juan M.
The purpose of this paper is to exploit the geometric structure of quantum mechanics and of statistical manifolds to study the qualitative effect that the quantum properties have in the statistical description of a system. We show that the end points of geodesics in the classical setting coincide with the probability distributions that minimise Shannon’s entropy, i.e. with distributions of zero dispersion. In the quantum setting this happens only for particular initial conditions, which in turn correspond to classical submanifolds. This result can be interpreted as a geometric manifestation of the uncertainty principle.
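For discrete probability distributions the Fisher-Rao geodesic has a closed form via the square-root embedding onto the unit sphere; the sketch below traces such a geodesic between two arbitrarily chosen distributions and evaluates Shannon's entropy along it.

import numpy as np

def fisher_rao_geodesic(p, q, ts):
    """Geodesic between discrete distributions p and q under the Fisher-Rao metric,
    via the square-root map onto the unit sphere (great-circle interpolation)."""
    x, y = np.sqrt(p), np.sqrt(q)
    phi = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    path = []
    for t in ts:
        z = (np.sin((1 - t) * phi) * x + np.sin(t * phi) * y) / np.sin(phi)
        path.append(z ** 2)                      # map back to the probability simplex
    return np.array(path)

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
for dist in fisher_rao_geodesic(p, q, np.linspace(0, 1, 5)):
    print(dist.round(3), round(shannon_entropy(dist), 3))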
Eby, Lisa A; Helmy, Olga; Holsinger, Lisa M; Young, Michael K
2014-01-01
Many freshwater fish species are considered vulnerable to stream temperature warming associated with climate change because they are ectothermic, yet there are surprisingly few studies documenting changes in distributions. Streams and rivers in the U.S. Rocky Mountains have been warming for several decades. At the same time these systems have been experiencing an increase in the severity and frequency of wildfires, which often results in habitat changes including increased water temperatures. We resampled 74 sites across a Rocky Mountain watershed 17 to 20 years after initial samples to determine whether there were trends in bull trout occurrence associated with temperature, wildfire, or other habitat variables. We found that site abandonment probabilities (0.36) were significantly higher than colonization probabilities (0.13), which indicated a reduction in the number of occupied sites. Site abandonment probabilities were greater at low elevations with warm temperatures. Other covariates, such as the presence of wildfire, nonnative brook trout, proximity to areas with many adults, and various stream habitat descriptors, were not associated with changes in probability of occupancy. Higher abandonment probabilities at low elevation for bull trout provide initial evidence validating the predictions made by bioclimatic models that bull trout populations will retreat to higher, cooler thermal refuges as water temperatures increase. The geographic breadth of these declines across the region is unknown but the approach of revisiting historical sites using an occupancy framework provides a useful template for additional assessments.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
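A short sketch of the classical discrepancy statistics referred to above: the Kolmogorov-Smirnov and Kuiper statistics between the empirical CDF of i.i.d. draws and a specified CDF, here a standard normal chosen as an illustrative null.

import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(3)
x = rng.standard_normal(500)                   # i.i.d. draws to be tested

# Kolmogorov-Smirnov statistic and p-value against the specified distribution
ks_stat, ks_p = kstest(x, norm.cdf)

# Kuiper statistic: D+ + D-, more sensitive near the tails than plain KS
xs = np.sort(x)
n = xs.size
cdf = norm.cdf(xs)
d_plus = np.max(np.arange(1, n + 1) / n - cdf)
d_minus = np.max(cdf - np.arange(0, n) / n)
kuiper = d_plus + d_minus

print(ks_stat, ks_p, kuiper)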
Surgical Treatment of Laser Induced Eye Injuries
1992-08-21
The project was carried out in response to the increasing incidence of... response of the eye to the initial damage. Clearly, little can be done about initial damage after the injurious event. However, it is quite probable...hemorrhage has been addressed by de Juan and Machemer (1988). These authors note similar progression of hemorrhage to fibrotic tissue, although the
A Simple Probabilistic Combat Model
2016-06-13
The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at
Data normalization in biosurveillance: an information-theoretic approach.
Peter, William; Najmi, Amir H; Burkom, Howard
2007-10-11
An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
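A minimal sketch of the entropy calculation behind this surprisability measure: normalize syndromic counts into proportions, compute the Shannon entropy, and flag a drop relative to a baseline. The counts and the alert threshold are invented for illustration.

import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Daily counts across syndrome categories (illustrative)
baseline_counts = np.array([40, 35, 30, 25, 20])
today_counts    = np.array([40, 35, 30, 25, 90])   # one category spikes

h_base, h_today = entropy(baseline_counts), entropy(today_counts)
# A more concentrated (more "surprising") distribution has lower entropy
alert = (h_base - h_today) > 0.05                  # illustrative threshold
print(h_base, h_today, alert)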
STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp
2015-03-10
Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.
On the extinction probability in models of within-host infection: the role of latency and immunity.
Yan, Ada W C; Cao, Pengxing; McCaw, James M
2016-10-01
Not every exposure to virus establishes infection in the host; instead, the small amount of initial virus could become extinct due to stochastic events. Different diseases and routes of transmission have a different average number of exposures required to establish an infection. Furthermore, the host immune response and antiviral treatment affect not only the time course of the viral load provided infection occurs, but can prevent infection altogether by increasing the extinction probability. We show that the extinction probability when there is a time-dependent immune response depends on the chosen form of the model-specifically, on the presence or absence of a delay between infection of a cell and production of virus, and the distribution of latent and infectious periods of an infected cell. We hypothesise that experimentally measuring the extinction probability when the virus is introduced at different stages of the immune response, alongside the viral load which is usually measured, will improve parameter estimates and determine the most suitable mathematical form of the model.
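The basic branching-process calculation that underlies such extinction probabilities: the per-lineage extinction probability is the smallest fixed point of the offspring probability generating function. The sketch below assumes a Poisson offspring distribution and ignores latency and the time-dependent immune response, which the paper shows can change the answer.

import numpy as np

def extinction_probability(pgf, tol=1e-12, max_iter=10_000):
    """Smallest fixed point q = G(q), found by iterating from q = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_new = pgf(q)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q_new

R0 = 2.5                                        # assumed mean number of secondary infections per infected cell
poisson_pgf = lambda s: np.exp(R0 * (s - 1.0))  # PGF of a Poisson offspring distribution

q = extinction_probability(poisson_pgf)
for v0 in (1, 5, 10):                           # initial number of infected cells / virions
    print(f"initial dose {v0:2d}: P(extinction) = {q**v0:.4f}")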
Explicit Two-Phase Modeling of the Initiation of Saltation over Heterogeneous Sand Beds
NASA Astrophysics Data System (ADS)
Turney, F. A.; Kok, J. F.; Martin, R. L.; Burr, D. M.; Bridges, N.; Ortiz, C. P.; Smith, J. K.; Emery, J. P.; Van Lew, J. T.
2016-12-01
The initiation of aeolian sediment transport is key in understanding the geomorphology of arid landscapes and emission of mineral dust into the atmosphere. Despite its importance, the process of saltation initiation remains poorly understood, and current models are highly simplified. Previous models of the initiation of aeolian saltation have assumed the particle bed to be monodisperse and homogeneous in arrangement, ignoring the distribution of particle thresholds created by different bed geometries and particle sizes. In addition, mean wind speeds are often used in place of a turbulent wind field, ignoring the distribution of wind velocities at the particle level. Furthermore, the transition from static bed to steady state saltation is often modeled as resulting directly from fluid lifting, while in reality particles need to hop and roll along the surface before attaining enough height and momentum to initiate the cascade of particle splashes that characterizes saltation. We simulate the initiation of saltation with a coupled two-phase CFD-DEM model that overcomes the shortcomings of previous models by explicitly modeling particle-particle and particle-fluid interactions at the particle scale. We constrain our model against particle trajectories taken from high speed video of initiation at the Titan Wind Tunnel at NASA Ames. Results give us insight into the probability that saltation will be initiated, given stochastic variations in bed properties and wind velocity.
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum-insulated equipment, a study was carried out to clarify breakdown probability distributions in a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution with a location parameter, which corresponds to the voltage below which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14 and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution with a location parameter. The shape parameter after no-load switching was 6∼8.5 and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
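A short numerical illustration of the three-parameter Weibull description above, with invented scale and location values and a shape parameter in the reported 10∼14 range; the double-break estimate assumes ideal 50/50 voltage sharing, which the abstract notes is not guaranteed.

import numpy as np

def weibull_breakdown_prob(v, shape, scale, location):
    """Three-parameter Weibull CDF: zero breakdown probability below the location parameter."""
    v = np.asarray(v, dtype=float)
    z = np.clip((v - location) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

shape, scale, location = 12.0, 40.0, 30.0        # illustrative values (kV)
voltages = np.array([50.0, 70.0, 90.0])

p_single = weibull_breakdown_prob(voltages, shape, scale, location)
# Double break with ideal voltage sharing: each gap sees half the applied voltage
p_half = weibull_breakdown_prob(voltages / 2, shape, scale, location)
p_double = 1.0 - (1.0 - p_half) ** 2

print(p_single.round(6), p_double.round(6))      # double break is far less likely to break down at low voltage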
Decaying two-dimensional turbulence in a circular container.
Schneider, Kai; Farge, Marie
2005-12-09
We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 × 10^4 in a circular container with no-slip boundary conditions. Starting with random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer on the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed with exponential tails for negative values.
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days) assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
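A small sketch of the beta-binomial adherence calculation: given α and β (estimated by maximum likelihood in the paper; the values below are invented), the probability of being active on all 7 of 7 assessed days.

from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial probability of k active days out of n assessed days."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

a, b = 1.6, 0.9          # illustrative Beta(a, b) parameters for the daily activity probability p
n = 7
pmf = [betabinom_pmf(k, n, a, b) for k in range(n + 1)]
print(sum(pmf))                               # sanity check: probabilities sum to 1
print(f"P(active 7/7 days) = {pmf[7]:.3f}")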
Exact combinatorial approach to finite coagulating systems
NASA Astrophysics Data System (ADS)
Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr
2018-02-01
This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
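A direct Monte Carlo counterpart to the exact combinatorial results described above, for the constant kernel and monodisperse initial conditions: many realizations of random binary aggregation are simulated, and the mean and standard deviation of the number of clusters of a given size are tabulated after a fixed number of merging events.

import numpy as np
from collections import Counter

rng = np.random.default_rng(11)

def simulate(n_monomers, n_events):
    """Binary aggregation with a constant kernel: pick two clusters uniformly at random and merge them."""
    clusters = [1] * n_monomers
    for _ in range(n_events):
        i, j = rng.choice(len(clusters), size=2, replace=False)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return Counter(clusters)

n_monomers, n_events, n_runs = 50, 25, 2000
counts = np.zeros((n_runs, n_monomers + 1))
for r in range(n_runs):
    for size, num in simulate(n_monomers, n_events).items():
        counts[r, size] = num

for s in (1, 2, 3, 4):
    print(f"size {s}: mean {counts[:, s].mean():.2f}, std {counts[:, s].std():.2f}")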
Technical Report 1205: A Simple Probabilistic Combat Model
2016-07-08
The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a
NASA Astrophysics Data System (ADS)
Straus, D. M.
2007-12-01
The probability distribution (pdf) of errors is followed in identical twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. 30 integrations per winter (for 15 winters) are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results interpreted in terms of clusters / regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.
Work statistics of charged noninteracting fermions in slowly changing magnetic fields.
Yi, Juyeon; Talkner, Peter
2011-04-01
We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^{2}. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.
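A quick numerical illustration of the Jarzynski estimate in the many-particle, high-temperature regime described above, where the work pdf is approximately Gaussian: for Gaussian work the equality gives ΔF = ⟨W⟩ - βσ²/2, and few-sample estimates scatter because the relevant low-work tail is rarely sampled. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(5)

beta = 1.0
mean_w, sigma_w = 2.0, 1.5                      # illustrative Gaussian work distribution
delta_f_exact = mean_w - beta * sigma_w**2 / 2  # Jarzynski result for Gaussian work

for n_samples in (10, 100, 10_000):
    w = rng.normal(mean_w, sigma_w, size=n_samples)
    delta_f_est = -np.log(np.mean(np.exp(-beta * w))) / beta
    print(f"n = {n_samples:6d}: estimate {delta_f_est:6.3f}  (exact {delta_f_exact:.3f})")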
Breast cancer screening programmes: the development of a monitoring and evaluation system.
Day, N E; Williams, D R; Khaw, K T
1989-06-01
It is important that the introduction of breast screening is closely monitored. The anticipated effect on breast cancer mortality will take 10 years or more fully to emerge, and will only occur if a succession of more short-term end points are met. Data from the Swedish two-county randomised trial provide targets that should be achieved, following a logical progression of compliance with the initial invitation, prevalence and stage distribution at the prevalence screen, the rate of interval cancers after the initial screen, the pick-up rate and stage distribution at later screening tests, the rate of interval cancers after later tests, the absolute rate of advanced cancer and finally the breast cancer mortality rate. For evaluation purposes, historical data on stage at diagnosis is desirable; it is suggested that tumour size is probably the most relevant variable available in most cases.
de la Fuente, Jaime; Garrett, C Gaelyn; Ossoff, Robert; Vinson, Kim; Francis, David O; Gelbard, Alexander
2017-11-01
To examine the distribution of clinic and operative pathology in a tertiary care laryngology practice. Probability density and cumulative distribution analyses (Pareto analysis) were used to rank-order laryngeal conditions seen in an outpatient tertiary care laryngology practice and those requiring surgical intervention during a 3-year period. Among 3783 new clinic consultations and 1380 operative procedures, voice disorders were the most common primary diagnostic category seen in clinic (n = 3223), followed by airway (n = 374) and swallowing (n = 186) disorders. Within the voice strata, the most common primary ICD-9 code used was dysphonia (41%), followed by unilateral vocal fold paralysis (UVFP) (9%) and cough (7%). Among new voice patients, 45% were found to have a structural abnormality. The most common surgical indications were laryngotracheal stenosis (37%), followed by recurrent respiratory papillomatosis (18%) and UVFP (17%). Nearly 55% of patients presenting to a tertiary referral laryngology practice did not have an identifiable structural abnormality in the larynx on direct or indirect examination. The distribution of ICD-9 codes requiring surgical intervention was disparate from that seen in clinic. Application of the Pareto principle may improve resource allocation in laryngology, but these initial results require confirmation across multiple institutions.
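The Pareto (rank-order plus cumulative distribution) analysis used above reduces to a few lines of code; the diagnosis counts below are invented for illustration and are not the clinic's actual tallies.

# Illustrative counts per primary diagnostic code
counts = {"dysphonia": 1320, "UVFP": 290, "cough": 225, "stenosis": 180,
          "papillomatosis": 120, "other": 1088}

total = sum(counts.values())
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for diagnosis, n in ranked:
    cumulative += n / total
    print(f"{diagnosis:15s} {n:5d}  share {n/total:5.1%}  cumulative {cumulative:5.1%}")
# The "vital few" categories carrying ~80% of the volume are the natural focus for resource allocation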
Survival Predictions of Ceramic Crowns Using Statistical Fracture Mechanics
Nasrin, S.; Katsube, N.; Seghi, R.R.; Rokhlin, S.I.
2017-01-01
This work establishes a survival probability methodology for interface-initiated fatigue failures of monolithic ceramic crowns under simulated masticatory loading. A complete 3-dimensional (3D) finite element analysis model of a minimally reduced molar crown was developed using commercially available hardware and software. Estimates of material surface flaw distributions and fatigue parameters for 3 reinforced glass-ceramics (fluormica [FM], leucite [LR], and lithium disilicate [LD]) and a dense sintered yttrium-stabilized zirconia (YZ) were obtained from the literature and incorporated into the model. Utilizing the proposed fracture mechanics–based model, crown survival probability as a function of loading cycles was obtained from simulations performed on the 4 ceramic materials utilizing identical crown geometries and loading conditions. The weaker ceramic materials (FM and LR) resulted in lower survival rates than the more recently developed higher-strength ceramic materials (LD and YZ). The simulated 10-y survival rate of crowns fabricated from YZ was only slightly better than those fabricated from LD. In addition, 2 of the model crown systems (FM and LD) were expanded to determine regional-dependent failure probabilities. This analysis predicted that the LD-based crowns were more likely to fail from fractures initiating from margin areas, whereas the FM-based crowns showed a slightly higher probability of failure from fractures initiating from the occlusal table below the contact areas. These 2 predicted fracture initiation locations have some agreement with reported fractographic analyses of failed crowns. In this model, we considered the maximum tensile stress tangential to the interfacial surface, as opposed to the more universally reported maximum principal stress, because it more directly impacts crack propagation. While the accuracy of these predictions needs to be experimentally verified, the model can provide a fundamental understanding of the importance that pre-existing flaws at the intaglio surface have on fatigue failures. PMID:28107637
Optimization of cell seeding in a 2D bio-scaffold system using computational models.
Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong
2017-05-01
The cell expansion process is a crucial part of generating cells on a large-scale level in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can be a representation of various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are the most optimal configurations. The results have also illustrated the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ribosome flow model with positive feedback
Margaliot, Michael; Tuller, Tamir
2013-01-01
Eukaryotic mRNAs usually form a circular structure; thus, ribosomes that terminate translation at the 3′ end can diffuse with increased probability to the 5′ end of the transcript, initiating another cycle of translation. This phenomenon describes ribosomal flow with positive feedback: an increase in the flow of ribosomes terminating translation of the open reading frame increases the ribosomal initiation rate. The aim of this paper is to model and rigorously analyse translation with feedback. We suggest a modified version of the ribosome flow model, called the ribosome flow model with input and output. In this model, the input is the initiation rate and the output is the translation rate. We analyse this model after closing the loop with a positive linear feedback. We show that the closed-loop system admits a unique globally asymptotically stable equilibrium point. From a biophysical point of view, this means that there exists a unique steady state of ribosome distributions along the mRNA, and thus a unique steady-state translation rate. The solution from any initial distribution will converge to this steady state. The steady-state distribution demonstrates a decrease in ribosome density along the coding sequence. For the case of constant elongation rates, we obtain expressions relating the model parameters to the equilibrium point. These results may perhaps be used to re-engineer the biological system in order to obtain a desired translation rate.
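A compact sketch of a ribosome flow model with input and output, closed with a positive linear feedback from the output (translation) rate to the initiation rate; the number of sites, elongation rates, and feedback law are illustrative assumptions, and the exact feedback used in the paper may differ.

import numpy as np

lam = np.array([1.0, 0.8, 0.9, 0.7, 1.1])   # elongation rates of the n sites (illustrative)
gain, offset = 0.5, 0.1                     # positive linear feedback: u = offset + gain * output
n = lam.size

def rfm_rhs(x):
    output = lam[-1] * x[-1]                # translation (output) rate
    u = offset + gain * output              # initiation (input) rate set by the feedback
    flow_in = np.empty(n)
    flow_in[0] = u * (1.0 - x[0])
    flow_in[1:] = lam[:-1] * x[:-1] * (1.0 - x[1:])
    flow_out = lam * x
    flow_out[:-1] *= (1.0 - x[1:])
    return flow_in - flow_out

# Euler integration from two different initial ribosome-density profiles;
# both should converge to the same steady state, consistent with global stability
for x in (np.full(n, 0.1), np.full(n, 0.9)):
    for _ in range(200_000):
        x = x + 1e-3 * rfm_rhs(x)
    print(x.round(4), "output:", round(lam[-1] * x[-1], 4))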
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
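A one-dimensional toy illustration of coverage-probability evaluation under random setup error: a static planned dose profile is delivered under sampled shifts of the target, and CP is the fraction of scenarios in which the entire CTV still receives at least 95% of the prescription. The geometry, margin, dose profile, and error model are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

# 1D geometry (mm): CTV centred at 0, PTV = CTV expanded by a margin
ctv_half, margin = 25.0, 5.0
prescription = 60.0                                    # Gy
x = np.linspace(-60, 60, 1201)

# Planned static dose: prescription inside the PTV, falling off linearly outside (toy profile)
ptv_half = ctv_half + margin
dose = np.where(np.abs(x) <= ptv_half, prescription,
                np.clip(prescription - 4.0 * (np.abs(x) - ptv_half), 0.0, None))

def ctv_covered(shift):
    """Does the shifted CTV receive >= 95% of the prescription everywhere?"""
    in_ctv = np.abs(x - shift) <= ctv_half
    return dose[in_ctv].min() >= 0.95 * prescription

setup_sd = 4.0                                         # mm, systematic and random errors lumped together
shifts = rng.normal(0.0, setup_sd, size=20_000)
cp = np.mean([ctv_covered(s) for s in shifts])
print(f"coverage probability ~ {cp:.3f}")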
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-07-01
The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
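For readers who want to experiment with the K-distribution formalism mentioned above, the sketch below evaluates one common compound (gamma-modulated exponential) form of the intensity K-distribution and recovers its shape parameter from moments of synthetic data. This parameterization and the moment-based fit are assumptions for illustration and may differ from the exact forms used in the paper.

```python
# Sketch: evaluate and fit a K-distribution PDF for intensity statistics.
# Uses one common compound (gamma-exponential) form with shape M and mean
# intensity <I>; it may differ from the parameterization used in the paper.
import numpy as np
from scipy.special import gamma as Gamma, kv

def k_pdf(I, M, mean_I):
    """K-distribution PDF for intensity I (shape M, mean <I>)."""
    I = np.asarray(I, dtype=float)
    arg = 2.0 * np.sqrt(M * I / mean_I)
    return (2.0 / Gamma(M)) * (M / mean_I) ** ((M + 1) / 2) \
           * I ** ((M - 1) / 2) * kv(M - 1, arg)

# Crude moment-based fit: for this form, <I^2>/<I>^2 = 2 * (1 + 1/M).
def fit_shape(samples):
    m1, m2 = samples.mean(), (samples ** 2).mean()
    return 1.0 / (m2 / (2.0 * m1 ** 2) - 1.0)

rng = np.random.default_rng(1)
M_true, mean_I = 3.0, 1.0
texture = rng.gamma(M_true, mean_I / M_true, 50_000)  # speckle "texture"
samples = rng.exponential(texture)                    # gamma-modulated exponential intensity
print("PDF at I = <I>:", float(k_pdf(mean_I, M_true, mean_I)))
print("estimated shape parameter:", fit_shape(samples))
```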
Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology
NASA Astrophysics Data System (ADS)
Wang, Xin; Szalay, Alex
2016-03-01
The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution," which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.
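A minimal numerical illustration of the "probability distribution" concept reviewed above, using an arbitrary categorical example and the normal distribution (all numbers are placeholders):

```python
# Small illustration of a probability distribution: probabilities of outcomes
# for a categorical variable, and areas under a normal density for a
# continuous one (values below are arbitrary examples).
from scipy.stats import norm

# Categorical variable: outcome probabilities must sum to 1.
blood_type = {"O": 0.45, "A": 0.40, "B": 0.11, "AB": 0.04}
assert abs(sum(blood_type.values()) - 1.0) < 1e-12

# Continuous variable: a normal distribution assigns probability to intervals.
mu, sigma = 120.0, 15.0                          # e.g., a measurement with mean 120, SD 15
p_between = norm.cdf(135, mu, sigma) - norm.cdf(105, mu, sigma)
print(f"P(105 <= X <= 135) = {p_between:.3f}")   # ~0.683 (within one SD of the mean)
```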
On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type
NASA Astrophysics Data System (ADS)
Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre
2016-09-01
We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.
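A small sketch of the heavy-tail diagnostic implied above: the normalized fourth-order moment of the wave envelope, which equals 2 for a complex Gaussian field and exceeds 2 when rogue-wave-like tails develop. The synthetic field below merely stands in for an NLS simulation output.

```python
# Sketch: normalized fourth-order moment of the wave envelope, a standard
# heavy-tail diagnostic (equal to 2 for a complex Gaussian field; larger
# values indicate an increased probability of extreme events).
import numpy as np

def kappa(psi):
    """<|psi|^4> / <|psi|^2>^2 over the ensemble/spatial samples."""
    a2 = np.abs(psi) ** 2
    return np.mean(a2 ** 2) / np.mean(a2) ** 2

rng = np.random.default_rng(2)
n = 200_000
gaussian_field = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
print("Gaussian reference:", kappa(gaussian_field))       # ~2.0
heavy = gaussian_field * np.sqrt(rng.gamma(0.5, 2.0, n))  # crude heavy-tailed modulation
print("Modulated field:   ", kappa(heavy))                # > 2.0
```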
On the role of dealing with quantum coherence in amplitude amplification
NASA Astrophysics Data System (ADS)
Rastegin, Alexey E.
2018-07-01
Amplitude amplification is one of the primary tools in building algorithms for quantum computers. This technique generalizes key ideas of the Grover search algorithm. Potentially useful modifications are connected with changing phases in the rotation operations and replacing the intermediate Hadamard transform with an arbitrary unitary one. In addition, an arbitrary initial distribution of the amplitudes may be prepared. We examine trade-off relations between measures of quantum coherence and the success probability in amplitude amplification processes. As measures of coherence, the geometric coherence and the relative entropy of coherence are considered. In terms of the relative entropy of coherence, complementarity relations with the success probability seem to be the most expository. The general relations presented are illustrated within several model scenarios of amplitude amplification processes.
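For orientation, the sketch below reproduces the textbook success probability of standard (Grover-type) amplitude amplification; it illustrates the amplification process itself and not the coherence trade-off relations derived in the paper.

```python
# Sketch: textbook success probability in standard amplitude amplification
# (Grover-type iterations). With initial success amplitude sin(theta), the
# probability after k iterations is sin^2((2k+1)*theta).
import math

def success_probability(n_marked, n_total, k):
    theta = math.asin(math.sqrt(n_marked / n_total))
    return math.sin((2 * k + 1) * theta) ** 2

N, M = 1 << 10, 1  # 1024 items, one marked
k_opt = math.floor(math.pi / (4 * math.asin(math.sqrt(M / N))))
for k in (0, 5, k_opt):
    print(f"k = {k}: success probability = {success_probability(M, N, k):.4f}")
```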
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted with the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
Stochastic theory of fatigue corrosion
NASA Astrophysics Data System (ADS)
Hu, Haiyun
1999-10-01
A stochastic theory of corrosion has been constructed. The stochastic equations are described, giving the transportation corrosion rate and the fluctuation corrosion coefficient. In addition, the pit diameter distribution function, the average pit diameter and the most probable pit diameter, as well as other related empirical formulas, have been derived. In order to clarify the effect of stress range on the initiation and growth behaviour of pitting corrosion, round smooth specimens were tested under cyclic loading in 3.5% NaCl solution.
Aridification driven diversification of fan-throated lizards from the Indian subcontinent.
Deepak, V; Karanth, Praveen
2018-03-01
The establishment of the monsoon climate and the consequent aridification has been one of the most important climate change episodes in the Indian subcontinent. However, little is known about how these events might have shaped diversification patterns among widely distributed taxa. Fan-throated lizards (FTL) (genera Sitana and Sarada) are widespread, diurnal, and restricted to the semi-arid zones of the Indian subcontinent. We sampled FTL in 107 localities across their range. We used a molecular species delimitation method and delineated 15 species, including six putative species. Thirteen of them were distinguishable based on morphology, but two sister species were indistinguishable and have minor overlaps in distribution. Five fossils were used to calibrate and date the phylogeny. Diversification of the fan-throated lizard lineage started ~18 mya, and higher lineage diversification was observed after 11 my. The initial diversification corresponds to the time when the monsoon climate was established, and the latter was a period of intensification of the monsoon and initiation of aridification. Thirteen out of the fifteen FTL species delimited are from Peninsular India; this is probably due to the landscape heterogeneity in this region. The species-poor sister genus Otocryptis is paraphyletic and probably represents relict lineages which are now confined to forested areas. Thus, seasonality-led changes in habitat, from forests to open habitats, appear to have driven the diversification of fan-throated lizards.
Towards an initial mass function for giant planets
NASA Astrophysics Data System (ADS)
Carrera, Daniel; Davies, Melvyn B.; Johansen, Anders
2018-07-01
The distribution of exoplanet masses is not primordial. After the initial stage of planet formation, gravitational interactions between planets can lead to the physical collision of two planets, or the ejection of one or more planets from the system. When this occurs, the remaining planets are typically left in more eccentric orbits. In this report we demonstrate how the present-day eccentricities of the observed exoplanet population can be used to reconstruct the initial mass function of exoplanets before the onset of dynamical instability. We developed a Bayesian framework that combines data from N-body simulations with present-day observations to compute a probability distribution for the mass of the planets that were ejected or collided in the past. Integrating across the exoplanet population, one can estimate the initial mass function of exoplanets. We find that the ejected planets are primarily sub-Saturn-type planets. While the present-day distribution appears to be bimodal, with peaks around ˜1MJ and ˜20M⊕, this bimodality does not seem to be primordial. Instead, planets around ˜60M⊕ appear to be preferentially removed by dynamical instabilities. Attempts to reproduce exoplanet populations using population synthesis codes should be mindful of the fact that the present population may have been depleted of sub-Saturn-mass planets. Future observations may reveal that young giant planets have a more continuous size distribution with lower eccentricities and more sub-Saturn-type planets. Lastly, there is a need for additional data and for more research on how the system architecture and multiplicity might alter our results.
NASA Astrophysics Data System (ADS)
Coelho, Flavio Codeço; Carvalho, Luiz Max De
2015-12-01
Quantifying the attack ratio of disease is key to epidemiological inference and public health planning. For multi-serotype pathogens, however, different levels of serotype-specific immunity make it difficult to assess the population at risk. In this paper we propose a Bayesian method for estimation of the attack ratio of an epidemic and the initial fraction of susceptibles using aggregated incidence data. We derive the probability distribution of the effective reproductive number, Rt, and use MCMC to obtain posterior distributions of the parameters of a single-strain SIR transmission model with time-varying force of infection. Our method is showcased in a data set consisting of 18 years of dengue incidence in the city of Rio de Janeiro, Brazil. We demonstrate that it is possible to learn about the initial fraction of susceptibles and the attack ratio even in the absence of serotype specific data. On the other hand, the information provided by this approach is limited, stressing the need for detailed serological surveys to characterise the distribution of serotype-specific immunity in the population.
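A minimal sketch of the kind of transmission model referred to above: a discrete-time single-strain SIR with a time-varying force of infection, tracking the effective reproduction number and the attack ratio. All parameter values are placeholders rather than dengue estimates, and no Bayesian inference step is included.

```python
# Minimal sketch: discrete-time single-strain SIR with a time-varying force of
# infection, tracking the effective reproduction number R_t = (beta_t/gamma)*S_t/N.
# Parameter values are arbitrary placeholders, not estimates for dengue.
import numpy as np

N, I0 = 1_000_000, 10
beta0, gamma = 0.4, 1 / 7          # per-day transmission and recovery rates (assumed)
S, I, R = N - I0, I0, 0
rows = []
for t in range(250):
    beta_t = beta0 * (1 + 0.3 * np.sin(2 * np.pi * t / 365))  # seasonal forcing (assumed)
    lam = beta_t * I / N                                      # force of infection
    new_inf = lam * S
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    rows.append((t, new_inf, (beta_t / gamma) * S / N))       # (day, incidence, R_t)

attack_ratio = (N - S) / N
print(f"final attack ratio: {attack_ratio:.3f}")
print("R_t at day 0 and day 249:", round(rows[0][2], 2), round(rows[-1][2], 2))
```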
Misra, Anil; Spencer, Paulette; Marangos, Orestes; Wang, Yong; Katz, J. Lawrence
2005-01-01
A finite element (FE) model has been developed based upon the recently measured micro-scale morphological, chemical and mechanical properties of dentin–adhesive (d–a) interfaces using confocal Raman microspectroscopy and scanning acoustic microscopy (SAM). The results computed from this FE model indicated that the stress distributions and concentrations are affected by the micro-scale elastic properties of the various phases composing the d–a interface. However, these computations were performed assuming isotropic material properties for the d–a interface. The d–a interface components, such as the peritubular and intertubular dentin, the partially demineralized dentin and the so-called 'hybrid layer' adhesive-collagen composite, are probably anisotropic. In this paper, the FE model is extended to account for the probable anisotropic properties of these d–a interface phases. A parametric study is performed to study the effect of anisotropy on the micromechanical stress distributions in the hybrid layer and the peritubular dentin phases of the d–a interface. It is found that the anisotropy of the phases affects the region and extent of stress concentration as well as the location of the maximum stress concentrations. Thus, the anisotropy of the phases could affect the probable location of failure initiation, whether in the peritubular region or in the hybrid layer. PMID:16849175
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
Deep Learning Role in Early Diagnosis of Prostate Cancer
Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman
2018-01-01
The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, prostate delineation using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, estimation and normalization of diffusion parameters, which are the apparent diffusion coefficients of the delineated prostate volumes at different b values followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model. Then, construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicate the promising results of the presented computer-aided diagnostic system. PMID:29804518
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.
2012-04-01
Artificial procedures are used to initiate spontaneous rupture on faults with the linear slip-weakening (LSW) friction law. Probably the most frequent technique is the stress asperity. It is important to minimize effects of the artificial initialization on the phase of the spontaneous rupture propagation. The effects may strongly depend on the geometry and size of the asperity, spatial distribution of the stress in and around the asperity, and a maximum stress-overshoot value. A square initialization zone with the stress discontinuously falling down at the asperity border to the level of the initial stress has been frequently applied (e.g., in the SCEC verification exercise). Galis et al. (2010) and Bizzarri (2010) independently introduced the elliptical asperity with a smooth spatial stress distribution in and around the asperity. In both papers the width of smoothing/tapering zone was only ad-hoc defined. Numerical simulations indicate that the ADER-DG method can account for a discontinuous-stress initialization more accurately than the FE method. Considering the ADER-DG solution a reference we performed numerical simulations in order to define the width of the smoothing/tapering zone to be used in the FE and FD-FE hybrid methods for spontaneous rupture propagation. We considered different sizes of initialization zone, different shapes of the initialization zone (square, circle, ellipse), different spatial distributions of stress (smooth, discontinuous), and different stress-overshoot values to investigate conditions of the spontaneous rupture propagation. We compare our numerical results with the 2D and 3D estimates by Andrews (1976a,b), Day (1982), Campillo & Ionescu (1997), Favreau at al. (1999) and Uenishi & Rice (2003, 2004). Results of our study may help modelers to better setup the initialization zone in order to avoid, e.g., a too large initialization zone and reduce numerical artifacts.
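As an illustration of a smoothly tapered elliptical initialization zone of the kind discussed above, the sketch below builds one on a fault-plane grid. The cosine taper and all stress values are assumptions chosen for illustration; they are not the specific functions of Galis et al. (2010) or Bizzarri (2010).

```python
# Sketch: a smoothly tapered elliptical stress asperity on a fault-plane grid.
# Inside the ellipse the shear stress exceeds the static strength by a chosen
# overshoot; over a tapering band of width w it decays back to the initial
# stress. The cosine taper and all numerical values are illustrative only.
import numpy as np

nx, ny, dx = 201, 201, 100.0                  # grid points and spacing (m)
x = (np.arange(nx) - nx // 2) * dx
y = (np.arange(ny) - ny // 2) * dx
X, Y = np.meshgrid(x, y, indexing="ij")

tau0, tau_s, overshoot = 70e6, 81.6e6, 0.5e6  # initial stress, static strength, overshoot (Pa)
a, b, w = 1500.0, 1000.0, 600.0               # ellipse semi-axes and taper width (m)

r = np.sqrt((X / a) ** 2 + (Y / b) ** 2)      # normalized elliptical "radius"
w_n = w / min(a, b)                           # taper width in normalized units
taper = np.clip((1.0 + w_n - r) / w_n, 0.0, 1.0)
smooth = 0.5 * (1.0 - np.cos(np.pi * taper))  # cosine ramp: 1 inside, 0 outside
tau = tau0 + (tau_s + overshoot - tau0) * smooth
print("peak stress [MPa]:", tau.max() / 1e6, " background [MPa]:", tau0 / 1e6)
```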
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
2017-11-01
When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
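For context, the sketch below shows the constructive side of the argument: maximizing the Boltzmann-Gibbs entropy subject to a mean constraint yields the usual exponential-family distribution, with the Lagrange multiplier found numerically. The outcomes and target mean are arbitrary examples.

```python
# Sketch: maximizing the Boltzmann-Gibbs entropy over a finite set of outcomes,
# subject to a fixed mean, yields p_i proportional to exp(-lambda * x_i). The
# Lagrange multiplier is found numerically for an arbitrary example.
import numpy as np
from scipy.optimize import brentq

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # outcomes (e.g., die faces)
target_mean = 4.5                               # observed average (example constraint)

def mean_given_lambda(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -10.0, 10.0)
p = np.exp(-lam * x)
p /= p.sum()
print("max-ent probabilities:", np.round(p, 4))
print("mean check:", round(p @ x, 4))
```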
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Available at http://probonto.org.
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda; Bayrak, Erdem; Bayrak, Yusuf
In this study we examined and compared three different probability distribution methods for determining the most suitable model in the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 with magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was the most suitable of the three distribution methods in this region.
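A sketch of the kind of fit-and-test comparison described above, using synthetic inter-event times, two- and three-parameter Weibull fits, and the K-S statistic. Note that K-S p-values computed with estimated parameters are only indicative.

```python
# Sketch: fit two- and three-parameter Weibull distributions to a sample
# (here synthetic inter-event times) and check each fit with the K-S test.
# Data are synthetic; p-values with estimated parameters are approximate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.weibull(1.4, 300) * 12.0 + 2.0        # synthetic sample (years)

# Two-parameter Weibull (location fixed at 0) vs three-parameter (location free).
c2, loc2, scale2 = stats.weibull_min.fit(data, floc=0.0)
c3, loc3, scale3 = stats.weibull_min.fit(data)

for name, params in [("2-parameter Weibull", (c2, loc2, scale2)),
                     ("3-parameter Weibull", (c3, loc3, scale3))]:
    d, p = stats.kstest(data, "weibull_min", args=params)
    print(f"{name}: D = {d:.3f}, p = {p:.3f}")
```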
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data we confirm the Gaussian distribution overestimates the mode and application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
NASA Astrophysics Data System (ADS)
Dupoyet, B.; Fiebig, H. R.; Musgrove, D. P.
2010-01-01
We report on initial studies of a quantum field theory defined on a lattice with multi-ladder geometry and the dilation group as a local gauge symmetry. The model is relevant in the cross-disciplinary area of econophysics. A corresponding proposal by Ilinski aimed at gauge modeling in non-equilibrium pricing is implemented in a numerical simulation. We arrive at a probability distribution of relative gains which matches the high frequency historical data of the NASDAQ stock exchange index.
Air Asset to Mission Assignment for Dynamic High-Threat Environments in Real-Time
2015-03-01
[Front-matter residue from the thesis: table-of-contents and list-of-figures fragments, including "Figure 2.1: Joint Air Tasking Cycle (JCS 2014), an iterative 120-hour cycle for planners", and partial abstract text describing minutes of on-station time ("playtime"), two GBU-16 laser-guided bombs (LGBs), an Advanced Targeting Forward Looking Infrared (ATFLIR), a GBU-16 LGB having no standoff capability, and probabilities of survival against the SA-2 and SA-3 systems.]
Mori, J.; Abercrombie, R.E.
1997-01-01
Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more small earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km.
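The sketch below shows a standard maximum-likelihood b-value estimate (Aki's formula) and the Gutenberg-Richter probability ratio behind the "18 times" statement; the catalogue is synthetic and the shallow and deep b values are illustrative, not the paper's estimates.

```python
# Sketch: Aki's maximum-likelihood b-value estimate and the implied ratio of
# probabilities that a small rupture grows to M >= 5.5 under two b values.
# The catalogue below is synthetic; Mc and the b values are placeholders.
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) estimator with the usual half-bin correction."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(4)
mc, b_true = 2.0, 1.0
mags = mc + rng.exponential(np.log10(np.e) / b_true, 5000)  # G-R distributed magnitudes
print("estimated b:", round(b_value_mle(mags, mc), 2))

# If shallow and deep seismicity have different b values, the relative
# probability that an M2 initiation reaches M >= 5.5 scales as 10**(-b*(5.5-2)).
b_shallow, b_deep = 1.3, 0.94                               # illustrative values
ratio = 10 ** (-(b_deep - b_shallow) * (5.5 - 2.0))
print("deep/shallow probability ratio:", round(ratio, 1))   # ~18 for these values
```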
Entanglement between atomic thermal states and coherent or squeezed photons in a damping cavity
NASA Astrophysics Data System (ADS)
Yadollahi, F.; Safaiee, R.; Golshan, M. M.
2018-02-01
In the present study, the standard Jaynes-Cummings model, in a lossy cavity, is employed to characterize the entanglement between atoms and photons when the former are initially in a thermal state (mixed ensemble) while the latter are described by either coherent or squeezed distributions. The whole system is thus assumed to be in equilibrium with a heat reservoir at a finite temperature T, and the measure of negativity is used to determine the time evolution of the atom-photon entanglement. To this end, the master equation for the density matrix, in the secular approximation, is solved and a partial transposition of the result is made. The degree of atom-photon entanglement is then numerically computed, through the negativity, as a function of time and temperature. To justify the behavior of the atom-photon entanglement, moreover, we employ the so-obtained total density matrix to compute and analyze the time evolution of the initial photonic coherent or squeezed probability distributions and the squeezing parameters. On more practical points, our results demonstrate that as the initial photon mean number increases, the atom-photon entanglement decays at a faster pace for the coherent distribution compared to the squeezed one. Moreover, it is shown that the degree of atom-photon entanglement is much higher and more stable for the squeezed distribution than for the coherent one. Consequently, we conclude that the time intervals during which the atom-photon entanglement is distillable are longer for the squeezed distribution. It is also illustrated that as the temperature increases the rate of approaching separability is faster for the coherent initial distribution. The novel point of the present report is the calculation of the dynamical density matrix (containing all physical information) for the combined atom-photon system in a lossy cavity, as well as the corresponding negativity, at a finite temperature.
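As a reference for the entanglement measure used above, the sketch below computes the negativity of a bipartite density matrix from the eigenvalues of its partial transpose. The Werner state used here is only a stand-in for the atom-photon density matrix obtained from the master equation.

```python
# Sketch: negativity of a bipartite (here two-qubit) density matrix via the
# partial transpose. The Werner state below is a generic test case, not the
# atom-photon state of the paper.
import numpy as np

def negativity(rho, dA, dB):
    """Sum of |negative eigenvalues| of the partial transpose over subsystem B."""
    r = rho.reshape(dA, dB, dA, dB)
    rho_pt = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # transpose B indices
    evals = np.linalg.eigvalsh(rho_pt)
    return float(np.sum(np.abs(evals[evals < 0])))

# Werner state: p * |Bell><Bell| + (1 - p)/4 * identity (entangled for p > 1/3).
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(bell, bell) + (1 - p) / 4 * np.eye(4)
    print(f"p = {p}: negativity = {negativity(rho, 2, 2):.3f}")
```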
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
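A small sketch of the statistics discussed above (skewness, excess kurtosis, flash fraction, and a lognormal fit), applied to synthetic irradiance samples standing in for the measured Ed time series:

```python
# Sketch: skewness, excess kurtosis, and a lognormal fit for simulated
# near-surface irradiance fluctuations (synthetic data standing in for the
# measured Ed time series).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
ed = rng.lognormal(mean=0.0, sigma=0.7, size=20_000)      # synthetic Ed samples
ed /= ed.mean()                                           # normalize to the time average

print("skewness:       ", round(stats.skew(ed), 2))
print("excess kurtosis:", round(stats.kurtosis(ed), 2))   # 0 for a normal distribution
print("fraction of flashes Ed > 1.5<Ed>:", round(np.mean(ed > 1.5), 3))

shape, loc, scale = stats.lognorm.fit(ed, floc=0.0)       # lognormal fit (loc fixed at 0)
d, p = stats.kstest(ed, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: sigma = {shape:.2f}, K-S D = {d:.3f}")
```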
Analysis of vector wind change with respect to time for Cape Kennedy, Florida
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1978-01-01
Multivariate analysis was used to determine the joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time, which is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from 15 years of twice-daily rawinsonde data, are presented by monthly reference periods for altitudes from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of the vector wind change is Rayleigh are tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of the vector wind at a future time, given the vector wind at an initial time, are derived. Wind changes over time periods from 1 to 5 hours, calculated from Jimsphere data, are presented. Extension of the theoretical prediction (based on rawinsonde data) of the wind component change standard deviation to time periods of 1 to 5 hours falls (with a few exceptions) within the 95 percentile confidence band of the population estimate obtained from the Jimsphere sample data. The joint distributions of wind change components, conditional wind components, and 1 km vector wind shear change components are illustrated by probability ellipses at the 95 percentile level.
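The conditional distributions mentioned above follow from the standard Gaussian conditioning formulas; the sketch below applies them to an assumed 4x4 covariance of the initial and future wind components. All numbers are placeholders, not Cape Kennedy statistics.

```python
# Sketch: conditional distribution of the future wind components given the
# initial ones, using the standard Gaussian conditioning formulas on an
# assumed 4x4 (quadravariate) covariance. Numbers are placeholders only.
import numpy as np

mu = np.array([5.0, 2.0, 6.0, 1.0])          # [u0, v0, u_tau, v_tau] means (m/s)
cov = np.array([[25.0,  3.0, 18.0,  2.0],
                [ 3.0, 20.0,  2.0, 14.0],
                [18.0,  2.0, 26.0,  3.0],
                [ 2.0, 14.0,  3.0, 21.0]])

def conditional_normal(mu, cov, idx_given, x_given):
    """Mean and covariance of the free components given the observed ones."""
    idx_free = [i for i in range(len(mu)) if i not in idx_given]
    S11 = cov[np.ix_(idx_free, idx_free)]
    S12 = cov[np.ix_(idx_free, idx_given)]
    S22 = cov[np.ix_(idx_given, idx_given)]
    w = S12 @ np.linalg.inv(S22)
    mu_c = mu[idx_free] + w @ (np.asarray(x_given) - mu[idx_given])
    cov_c = S11 - w @ S12.T
    return mu_c, cov_c

mu_c, cov_c = conditional_normal(mu, cov, idx_given=[0, 1], x_given=[10.0, -4.0])
print("conditional mean of (u_tau, v_tau):", np.round(mu_c, 2))
print("conditional covariance:\n", np.round(cov_c, 2))
```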
The dynamics of the multi-planet system orbiting Kepler-56
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Gongjie; Naoz, Smadar; Johnson, John Asher
2014-10-20
Kepler-56 is a multi-planet system containing two coplanar inner planets that are in orbits misaligned with respect to the spin axis of the host star, and an outer planet. Various mechanisms have been proposed to explain the broad distribution of spin-orbit angles among exoplanets, and these theories fall under two broad categories. The first is based on dynamical interactions in a multi-body system, while the other assumes that disk migration is the driving mechanism in planetary configuration and that the star (or disk) is tilted with respect to the planetary plane. Here we show that the large observed obliquity of the Kepler-56 system is consistent with a dynamical origin. In addition, we use observations by Huber et al. to derive the obliquity's probability distribution function, thus improving the constrained lower limit. The outer planet may be the cause of the inner planets' large obliquities, and we give the probability distribution function of its inclination, which depends on the initial orbital configuration of the planetary system. We show that even in the presence of a precise measurement of the true obliquity, one cannot distinguish the initial configurations. Finally, we consider the fate of the system as the star continues to evolve beyond the main sequence, and we find that the obliquity of the system will not undergo major variations as the star climbs the red giant branch. We follow the evolution of the system and find that the innermost planet will be engulfed in ∼129 Myr. Furthermore, we put an upper limit of ∼155 Myr for the engulfment of the second planet. This corresponds to ∼3% of the current age of the star.
Theory and simulation of the time-dependent rate coefficients of diffusion-influenced reactions.
Zhou, H X; Szabo, A
1996-01-01
A general formalism is developed for calculating the time-dependent rate coefficient k(t) of an irreversible diffusion-influenced reaction. This formalism allows one to treat most factors that affect k(t), including rotational Brownian motion and conformational gating of reactant molecules and orientation constraint for product formation. At long times k(t) is shown to have the asymptotic expansion k(∞)[1 + k(∞)(πDt)^(-1/2)/(4πD) + ...], where D is the relative translational diffusion constant. An approximate analytical method for calculating k(t) is presented. This is based on the approximation that the probability density of the reactant pair in the reactive region keeps the equilibrium distribution but with a decreasing amplitude. The rate coefficient then is determined by the Green function in the absence of chemical reaction. Within the framework of this approximation, two general relations are obtained. The first relation allows the rate coefficient for an arbitrary amplitude of the reactivity to be found if the rate coefficient for one amplitude of the reactivity is known. The second relation allows the rate coefficient in the presence of conformational gating to be found from that in the absence of conformational gating. The ratio k(t)/k(0) is shown to be the survival probability of the reactant pair at time t starting from an initial distribution that is localized in the reactive region. This relation forms the basis of the calculation of k(t) through Brownian dynamics simulations. Two simulation procedures involving the propagation of nonreactive trajectories initiated only from the reactive region are described and illustrated on a model system. Both analytical and simulation results demonstrate the accuracy of the equilibrium-distribution approximation method. PMID:8913584
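A minimal numerical sketch of the quoted long-time behaviour is given below; the values of k(∞) and D are placeholders, and the higher-order terms of the expansion are dropped.

```python
import numpy as np

# Long-time asymptote of the diffusion-influenced rate coefficient quoted
# in the abstract: k(t) ~ k_inf * [1 + k_inf/(4*pi*D) * (pi*D*t)**-0.5].
# k_inf and D below are illustrative, not values from the paper.
def k_longtime(t, k_inf=1.0, D=1.0):
    return k_inf * (1.0 + k_inf / (4.0 * np.pi * D) * (np.pi * D * t) ** -0.5)

print(k_longtime(t=np.array([1.0, 10.0, 100.0])))
```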
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
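The single-integral form and its trapezoidal evaluation can be sketched as follows: a slip occurs when the available friction falls below the required friction, so the probability of slip is the required-friction density weighted by the available-friction CDF. The lognormal/normal choices here are illustrative placeholders, not the distributions fitted in the study.

```python
import numpy as np
from scipy import stats

# P(slip) = integral over u of f_required(u) * F_available(u) du,
# evaluated with the trapezoidal rule on a friction-coefficient grid.
def probability_of_slip(f_required_pdf, F_available_cdf, lo=0.0, hi=2.0, n=2001):
    u = np.linspace(lo, hi, n)
    integrand = f_required_pdf(u) * F_available_cdf(u)
    return np.trapz(integrand, u)

# Illustrative distributions (assumed, not from the paper):
p_slip = probability_of_slip(stats.lognorm(s=0.3, scale=0.25).pdf,
                             stats.norm(loc=0.5, scale=0.1).cdf)
```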
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
This integrated circuit produces 8-bit pseudorandom numbers from a specified probability distribution at a rate of 10 MHz. Using Boolean logic, the circuit implements a pseudorandom-number-generating algorithm. The circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying the specified nonuniform probability distribution are generated by processing the uniformly distributed outputs of the eight 12-bit pseudorandom-number generators through a "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
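A software analogue of the circuit's idea, offered only as an illustration of the underlying technique, maps uniformly distributed integers onto a specified distribution through a cumulative lookup table; the hardware achieves the same mapping with its pipeline of comparators and memories. The 256-entry PMF below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
target_pmf = np.full(256, 1 / 256)             # hypothetical nonuniform PMF would go here
cdf = np.cumsum(target_pmf)                    # cumulative distribution table
uniform_u12 = rng.integers(0, 2**12, size=16)  # 12-bit uniform draws
samples_8bit = np.searchsorted(cdf, uniform_u12 / 2**12)  # inverse-CDF lookup
```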
Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression
NASA Astrophysics Data System (ADS)
Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli
2018-06-01
Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
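The GMR step alone can be sketched as below (the HMM state sequence and its Baum-Welch training are omitted): a joint Gaussian mixture is fitted to [covariates, streamflow], and the conditional mixture for the streamflow given the covariates follows from per-component Gaussian conditioning. The synthetic data, column layout, and component count are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 4))          # 3 covariate columns + 1 streamflow column
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(data)

def gmr_conditional(gmm, x, d_x=3):
    """Per-component (weight, mean, variance) of p(y | x) for scalar y."""
    parts = []
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_x, mu_y = mu[:d_x], mu[d_x:]
        S_xx, S_xy = cov[:d_x, :d_x], cov[:d_x, d_x:]
        S_yx, S_yy = cov[d_x:, :d_x], cov[d_x:, d_x:]
        gain = S_yx @ np.linalg.inv(S_xx)
        m = (mu_y + gain @ (x - mu_x)).item()        # conditional mean
        v = (S_yy - gain @ S_xy).item()              # conditional variance
        lik = np.exp(-0.5 * (x - mu_x) @ np.linalg.solve(S_xx, x - mu_x))
        lik /= np.sqrt(np.linalg.det(2 * np.pi * S_xx))
        parts.append((w * lik, m, v))                # re-weight by p(x | component)
    total = sum(p[0] for p in parts)
    return [(w / total, m, v) for w, m, v in parts]

pred = gmr_conditional(gmm, x=np.zeros(3))
```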
NASA Astrophysics Data System (ADS)
Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo
Prior studies have shown that low-symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high-symmetry particles, because convenient tracking points (TPs) usually do not coincide with the center of hydrodynamic stress (CoH), where the translational and rotational motions are decoupled. In this paper we study the effects of translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arms. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for fixed initial orientation change from an elliptical to a crescent shape and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observed that at long times the PDFs revert to Gaussian. The crescent shape of the 2D PDF provides a clear physical picture of the non-zero mean displacements observed in boomerang particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
Modeling Periodic Adiabatic Shear Bands Evolution in a 304L Stainless Steel Thick-Walled Cylinder
NASA Astrophysics Data System (ADS)
Liu, Mingtao; Hu, Haibo; Fan, Cheng; Tang, Tiegang
2015-06-01
The self-organization of multiple shear bands in a 304L stainless steel thick-walled cylinder (TWC) was numerically studied. The microstructure of the material leads to a non-uniform distribution of local yield stress, which plays a key role in the formation of spontaneous shear localization. We introduced a probability factor satisfying a Gaussian distribution into the macroscopic constitutive relationship to describe the non-uniformity of the local yield stress. Using this probability factor, the initiation and propagation of multiple shear bands in the TWC were numerically replicated in our 2D FEM simulation. Experimental results in the literature indicate that the machined surface at the internal boundary of a 304L stainless steel cylinder provides a work-hardened layer (about 20 μm) whose microstructure differs significantly from that of the base material. The work-hardened layer leads to the phenomenon that most shear bands run in either the clockwise or the counterclockwise direction. In our simulation, periodic oriented perturbations were applied to describe the grain orientation in the work-hardened layer, and the spiral pattern of shear bands was successfully replicated.
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
Lee, S.-Y.; Barnes, C.G.; Snoke, A.W.; Howard, K.A.; Frost, C.D.
2003-01-01
Two groups of closely associated, peraluminous, two-mica granitic gneiss were identified in the area. The older, sparsely distributed unit is equigranular (EG) with initial εNd ≈ -8.8 and initial 87Sr/86Sr ≈ 0.7098. Its age is uncertain. The younger unit is Late Cretaceous (~80 Ma), pegmatitic, and sillimanite-bearing (KPG), with εNd from -15.8 to -17.3 and initial 87Sr/86Sr from 0.7157 to 0.7198. The concentrations of Fe, Mg, Na, Ca, Sr, V, Zr, Zn and Hf are higher, and K, Rb and Th are lower in the EG. Major- and trace-element models indicate that the KPG was derived by muscovite dehydration melting (<35 km depth) of Neoproterozoic metapelitic rocks that are widespread in the eastern Great Basin. The models are broadly consistent with anatexis of crust tectonically thickened during the Sevier orogeny; no mantle mass or heat contribution was necessary. As such, this unit represents one crustal end-member of regional Late Cretaceous peraluminous granites. The EG was produced by biotite dehydration melting at greater depths, with garnet stable in the residue. The source of the EG was probably Paleoproterozoic metagraywacke. Because EG magmatism probably pre-dated Late Cretaceous crustal thickening, it required heat input from the mantle or from mantle-derived magma.
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
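The Poissonian aggregation mentioned above has a simple closed form: if each source produces tsunamis exceeding a target size as an independent Poisson process with its own rate, the probability of at least one exceedance within an exposure time is one minus the exponential of the summed rates. The rates in the sketch are placeholders.

```python
import numpy as np

# rates_per_year: annual rates at which individual sources produce tsunamis
# exceeding a chosen runup at the coast (placeholder values below);
# T_years: exposure window.
def aggregate_exceedance_probability(rates_per_year, T_years):
    return 1.0 - np.exp(-T_years * np.sum(rates_per_year))

p_30yr = aggregate_exceedance_probability([1e-4, 5e-5, 2e-4], T_years=30)
```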
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects
NASA Technical Reports Server (NTRS)
Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip
2007-01-01
When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data is usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well characterized doses of a well studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing), and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze if radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model, tracking average pre-malignant cell numbers, would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend treatment gaps during radiotherapy, apart from decreasing the probability of eradicating the primary cancer, substantially increase the risk of later second cancers.
Tilting Uranus without a Collision
NASA Astrophysics Data System (ADS)
Rogoszinski, Zeeve; Hamilton, Douglas P.
2016-10-01
The most accepted hypothesis for the origin of Uranus' 98° obliquity is a giant collision during the late stages of planetary accretion. This model requires a single Earth mass object striking Uranus at high latitudes; such events occur with a probability of about 10%. Alternatively, Uranus' obliquity may have arisen from a sequence of smaller impactors which lead to a uniform distribution of obliquities. Here we explore a third model for tilting Uranus using secular spin-orbit resonance theory. We investigate early Solar System configurations in which a secular resonance between Uranus' axial precession frequency and another planet's orbital node precession frequency might occur.Thommes et al. (1999) hypothesized that Uranus and Neptune initially formed between Jupiter and Saturn, and were then kicked outward. In our scenario, Neptune leaves first while Uranus remains behind. As an exterior Neptune slowly migrates outward, it picks up both Uranus and Saturn in spin-orbit resonances (Ward and Hamilton 2004; Hamilton and Ward 2004). Only a distant Neptune has a nodal frequency slow enough to resonate with Uranus' axial precession.This scenario, with diverging orbits, results in resonance capture. As Neptune migrates outward its nodal precession slows. While in resonance, Uranus and Saturn each tilt a bit further, slowing their axial precession rates to continually match Neptune's nodal precession rate. Tilting Uranus to high obliquities takes a few 100 Myrs. This timescale may be too long to hold Uranus captive between Jupiter and Saturn, and we are investigating how to reduce it. We also find that resonance capture is rare if Uranus' initial obliquity is greater than about 10°, as the probability of capture decreases as the planet's initial obliquity increases. We will refine this estimate by quantifying capture statistics, and running accretion simulations to test the likelihood of a low early obliquity. Our preliminary findings show that most assumptions about planetary accretion lead to nearly isotropic obliquity distributions for early Uranus. Thus, the odds of Uranus having an initial low obliquity is also about 10%.
Axion excursions of the landscape during inflation
NASA Astrophysics Data System (ADS)
Palma, Gonzalo A.; Riquelme, Walter
2017-07-01
Because of their quantum fluctuations, axion fields had a chance to experience field excursions traversing many minima of their potentials during inflation. We study this situation by analyzing the dynamics of an axion field ψ, present during inflation, with a periodic potential given by v(ψ) = Λ^4[1 - cos(ψ/f)]. By assuming that the vacuum expectation value of the field is stabilized at one of its minima, say ψ = 0, we compute every n-point correlation function of ψ up to first order in Λ^4 using the in-in formalism. This computation allows us to identify the distribution function describing the probability of measuring ψ at a particular amplitude during inflation. Because ψ is able to tunnel between the barriers of the potential, we find that the probability distribution function consists of a non-Gaussian multimodal distribution such that the probability of measuring ψ at a minimum of v(ψ) different from ψ = 0 increases with time. As a result, at the end of inflation, different patches of the Universe are characterized by different values of the axion field amplitude, leading to important cosmological phenomenology: (a) Isocurvature fluctuations induced by the axion at the end of inflation could be highly non-Gaussian. (b) If the axion defines the strength of standard model couplings, then one is led to a concrete realization of the multiverse. (c) If the axion corresponds to dark matter, one is led to the possibility that, within our observable Universe, dark matter started with a nontrivial initial condition, implying novel signatures for future surveys.
Positive phase space distributions and uncertainty relations
NASA Technical Reports Server (NTRS)
Kruger, Jan
1993-01-01
In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
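For one degree of freedom, the stated criterion can be written as a one-line check: a bivariate normal phase-space distribution corresponds to a Wigner function of some (pure or mixed) state if and only if its covariance matrix satisfies the Schrödinger form of the uncertainty relation, det Σ ≥ (ħ/2)². The sketch below assumes ħ = 1 units.

```python
import numpy as np

# S = [[var_q, cov_qp], [cov_qp, var_p]] is the phase-space covariance matrix.
# Validity as a Wigner function requires var_q*var_p - cov_qp**2 >= (hbar/2)**2.
def is_valid_wigner_gaussian(S, hbar=1.0):
    return float(np.linalg.det(np.asarray(S))) >= (hbar / 2.0) ** 2

print(is_valid_wigner_gaussian([[0.5, 0.1], [0.1, 0.6]]))
```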
NASA Astrophysics Data System (ADS)
Castle, James R.; CMS Collaboration
2017-11-01
Flow harmonic fluctuations are studied for PbPb collisions at √{sNN} = 5.02 TeV using the CMS detector at the LHC. Flow harmonic probability distributions p(v2) are obtained by unfolding smearing effects from observed azimuthal anisotropy distributions using particles of 0.3
Work fluctuations for Bose particles in grand canonical initial states.
Yi, Juyeon; Kim, Yong Woon; Talkner, Peter
2012-05-01
We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Power-law decay exponents: A dynamical criterion for predicting thermalization
NASA Astrophysics Data System (ADS)
Távora, Marco; Torres-Herrera, E. J.; Santos, Lea F.
2017-01-01
From the analysis of the relaxation process of isolated lattice many-body quantum systems quenched far from equilibrium, we deduce a criterion for predicting when they are certain to thermalize. It is based on the algebraic behavior ∝ t^(-γ) of the survival probability at long times. We show that the value of the power-law exponent γ depends on the shape and filling of the weighted energy distribution of the initial state. Two scenarios are explored in detail: γ ≥ 2 and γ < 1. Exponents γ ≥ 2 imply that the energy distribution of the initial state is ergodically filled and the eigenstates are uncorrelated, so thermalization is guaranteed to happen. In this case, the power-law behavior is caused by bounds in the energy spectrum. Decays with γ < 1 emerge when the energy eigenstates are correlated and signal lack of ergodicity. They are typical of systems undergoing localization due to strong onsite disorder and are found also in clean integrable systems.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
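As a generic illustration of the ensemble Kalman filter update mentioned at the end (not the presenter's implementation), a perturbed-observation analysis step can be written in a few lines:

```python
import numpy as np

# X: (n_state, n_ens) forecast ensemble; y: observation vector;
# H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs)
# observation-error covariance.  All inputs are assumed NumPy arrays.
def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                    # analysis ensemble
```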
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
NASA Astrophysics Data System (ADS)
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan when scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed in this paper. Because the problem is a multi-dimensional discrete one, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the average task processing time; on the other hand, an effective adaptive learning-rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving these two initial individuals, while the remaining initial individuals are generated at random. Finally, the sampling process is divided into two parts, sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
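A sketch of the PBIL core assumed by the abstract is given below: each task keeps an independent probability vector over the machines, schedules are sampled from those vectors, and the vectors are nudged toward the best schedule found. The learning-rate handling, makespan function, and problem sizes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# prob: (n_tasks, n_machines) row-stochastic matrix of assignment probabilities.
# evaluate(schedule) would return the makespan of a task-to-machine assignment.
def pbil_step(prob, evaluate, rng, n_samples=50, lr=0.1):
    pop = np.array([[rng.choice(prob.shape[1], p=row) for row in prob]
                    for _ in range(n_samples)])            # sampled schedules
    best = pop[np.argmin([evaluate(ind) for ind in pop])]  # lowest makespan
    target = np.zeros_like(prob)
    target[np.arange(len(best)), best] = 1.0               # one-hot best schedule
    prob = (1 - lr) * prob + lr * target                   # PBIL probability update
    return prob / prob.sum(axis=1, keepdims=True), best
```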
Lim, Hyang-Tag; Hong, Kang-Hee; Kim, Yoon-Ho
2015-10-21
Quantum coherence and entanglement, which are essential resources for quantum information, are often degraded and lost due to decoherence. Here, we report a proof-of-principle experimental demonstration of high fidelity entanglement distribution over decoherence channels via qubit transduction. By unitarily switching the initial qubit encoding to another, which is insensitive to particular forms of decoherence, we have demonstrated that it is possible to avoid the effect of decoherence completely. In particular, we demonstrate high-fidelity distribution of photonic polarization entanglement over quantum channels with two types of decoherence, amplitude damping and polarization-mode dispersion, via qubit transduction between polarization qubits and dual-rail qubits. These results represent a significant breakthrough in quantum communication over decoherence channels as the protocol is input-state independent, requires no ancillary photons and symmetries, and has near-unity success probability.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
NASA Astrophysics Data System (ADS)
Su, Zhaofeng; Guan, Ji; Li, Lvzhou
2018-01-01
Quantum entanglement is an indispensable resource for many significant quantum information processing tasks. However, in practice, it is difficult to distribute quantum entanglement over a long distance, due to the absorption and noise in quantum channels. A solution to this challenge is a quantum repeater, which can extend the distance of entanglement distribution. In this scheme, the time consumption of classical communication and local operations takes an important place with respect to time efficiency. Motivated by this observation, we consider a basic quantum repeater scheme that focuses on not only the optimal rate of entanglement concentration but also the complexity of local operations and classical communication. First, we consider the case where two different two-qubit pure states are initially distributed in the scenario. We construct a protocol with the optimal entanglement-concentration rate and less consumption of local operations and classical communication. We also find a criterion for the projective measurements to achieve the optimal probability of creating a maximally entangled state between the two ends. Second, we consider the case in which two general pure states are prepared and general measurements are allowed. We get an upper bound on the probability for a successful measurement operation to produce a maximally entangled state without any further local operations.
NASA Astrophysics Data System (ADS)
Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.
2016-12-01
Continuous over-exploitation of groundwater causes dramatic drawdown and leads to regional land subsidence in the Huairou Emergency Water Resources region, located in the upper-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of the strata's lithofacies in the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of the lithofacies. The transition probability geostatistics approach provides a means of characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the clay-layer thickness extracted from the simulation with the deformation field acquired from PS-InSAR, the influence of the lithofacies on land subsidence can be analyzed quantitatively. The lithofacies derived from borehole data were generalized into four categories, of which clay was the predominant compressible material, and their probability distribution in the observed space was estimated using transition probability geostatistics. Geologically plausible realizations of the lithofacies distribution were produced, accounting for the complex heterogeneity of the alluvial plain. At a probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of lithofacies. This level, nearly equaling the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and incompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Similar patterns were found between the spatial distributions of the deformation field and the clay layer. In areas with roughly similar water-table decline, subsidence occurs more where the subsurface has a higher probability of containing compressible material than where that probability is lower. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
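The central object in transition probability geostatistics is the matrix of transition probabilities between lithofacies categories as a function of lag distance, commonly modeled as a matrix exponential of a rate matrix built from proportions and mean lengths. The sketch below shows only that step, with a placeholder rate matrix; it is not the authors' workflow.

```python
import numpy as np
from scipy.linalg import expm

# R: square transition-rate matrix between lithofacies categories
# (off-diagonal entries >= 0, rows summing to zero); h: lag distance.
def transition_probabilities(R, h):
    return expm(np.asarray(R) * h)   # rows of the result sum to 1

R = np.array([[-0.10, 0.06, 0.04],   # placeholder rates for three categories
              [ 0.05, -0.08, 0.03],
              [ 0.02, 0.03, -0.05]])
print(transition_probabilities(R, h=10.0))
```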
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
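For reference, the rank product statistic itself is easy to compute; the sketch below also adds a crude permutation comparison of the kind the exact derivation is meant to replace. The data layout (genes by replicates of fold-change values, with smaller ranks meaning stronger regulation) is an assumption.

```python
import numpy as np

# data: (n_genes, n_replicates) array; ranks are taken within each replicate.
def rank_product(data):
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    return ranks.prod(axis=1) ** (1.0 / data.shape[1])   # geometric mean of ranks

def permutation_pvalues(data, n_perm=1000, rng=np.random.default_rng(0)):
    rp_obs = rank_product(data)
    null = np.array([rank_product(rng.permuted(data, axis=0))
                     for _ in range(n_perm)])
    return (null <= rp_obs).mean(axis=0)                 # crude per-gene p-values
```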
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems to be small for any probability. For a set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether rational runoff coefficient tables can be arranged in advance for use with the rational method, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil were carried out.
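Two of the three coupled building blocks can be sketched directly; the IDF power-law form, the Green-Ampt parameters, and the units below are illustrative assumptions, and the kinematic-wave routing that completes the GABS coupling is omitted.

```python
# IDF-type design intensity i(d, T) = a * T**m / d**n (illustrative form).
def idf_intensity(duration_h, return_period_y, a=30.0, m=0.2, n=0.7):
    return a * return_period_y ** m / duration_h ** n      # mm/h

# Green-Ampt infiltration capacity f = Ks * (1 + psi * dtheta / F),
# where F is the cumulative infiltration depth (placeholder parameters).
def green_ampt_rate(F_mm, Ks=10.0, psi_mm=110.0, dtheta=0.3):
    return Ks * (1.0 + psi_mm * dtheta / max(F_mm, 1e-6))   # mm/h
```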
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
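One of the compared approaches, stabilized weights built from normal densities, can be sketched as follows; the linear exposure model and variable names are assumptions for illustration.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# exposure: 1-D NumPy array of the continuous exposure;
# covariates: 2-D array of confounders.  The numerator uses the marginal
# normal density of the exposure, the denominator the normal density
# conditional on covariates from an ordinary least-squares fit.
def stabilized_ipw(exposure, covariates):
    X = sm.add_constant(covariates)
    fit = sm.OLS(exposure, X).fit()
    mu_cond, sd_cond = fit.fittedvalues, np.sqrt(fit.scale)
    num = stats.norm(exposure.mean(), exposure.std(ddof=1)).pdf(exposure)
    den = stats.norm(mu_cond, sd_cond).pdf(exposure)
    return num / den
```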
Role of social environment and social clustering in spread of opinions in coevolving networks.
Malik, Nishant; Mucha, Peter J
2013-12-01
Taking a pragmatic approach to the processes involved in the phenomena of collective opinion formation, we investigate two specific modifications to the coevolving network voter model of opinion formation studied by Holme and Newman [Phys. Rev. E 74, 056108 (2006)]. First, we replace the rewiring probability parameter by a distribution of probability of accepting or rejecting opinions between individuals, accounting for heterogeneity and asymmetric influences in relationships between individuals. Second, we modify the rewiring step by a path-length-based preference for rewiring that reinforces local clustering. We have investigated the influences of these modifications on the outcomes of simulations of this model. We found that varying the shape of the distribution of probability of accepting or rejecting opinions can lead to the emergence of two qualitatively distinct final states, one having several isolated connected components each in internal consensus, allowing for the existence of diverse opinions, and the other having a single dominant connected component with each node within that dominant component having the same opinion. Furthermore, more importantly, we found that the initial clustering in the network can also induce similar transitions. Our investigation also indicates that these transitions are governed by a weak and complex dependence on system size. We found that the networks in the final states of the model have rich structural properties including the small world property for some parameter regimes.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
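The two distributional building blocks named in the abstract are written out below with scipy; how Cohen composes them into the small-area thunderstorm distribution is not reproduced here.

```python
from scipy import stats

# Zero-truncated Poisson: P(K = k | K >= 1) for K ~ Poisson(lam).
def zero_truncated_poisson_pmf(k, lam):
    return stats.poisson(lam).pmf(k) / (1.0 - stats.poisson(lam).pmf(0))

# Negative binomial with r "successes" and success probability p.
def negative_binomial_pmf(k, r, p):
    return stats.nbinom(r, p).pmf(k)
```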
NASA Astrophysics Data System (ADS)
Valageas, P.
2000-02-01
In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. Then, we can directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies) where the local slope of the initial linear power spectrum is n = -2. We recover the behaviour seen by numerical simulations: P(μ) peaks at a value slightly smaller than the mean <μ> = 1 and it shows an extended large-μ tail (as described in another article our predictions also show a good quantitative agreement with results from N-body simulations for a finite smoothing angle). Then, we study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s = 0.5 and z_s = 1. However, observations can unambiguously discriminate between Ω_m = 0.3 and Ω_m = 1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified" the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measure of weak lensing distortions.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies the load is taken as constant, but load varies continually with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and then solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic results obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profiles and losses obtained with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
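The load-sampling step of the Monte-Carlo procedure can be sketched as below for the active power at one bus under a ZIP model; the ZIP coefficients, per-unit voltage, and load statistics are placeholders, and the radial load-flow solver itself is not shown.

```python
import numpy as np

# Active power at a bus under a ZIP model, P = P0*(a_z*V^2 + a_i*V + a_p),
# with the nominal demand P0 drawn from a normal distribution per sample.
def sample_zip_load(p_mean, p_std, v_pu, a_z=0.3, a_i=0.3, a_p=0.4,
                    rng=np.random.default_rng(0), n=1000):
    p0 = rng.normal(p_mean, p_std, size=n)          # random nominal loads
    return p0 * (a_z * v_pu**2 + a_i * v_pu + a_p)  # one value per Monte-Carlo run
```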
Reliability of windstorm predictions in the ECMWF ensemble prediction system
NASA Astrophysics Data System (ADS)
Becker, Nico; Ulbrich, Uwe
2016-04-01
Windstorms caused by extratropical cyclones are one of the most dangerous natural hazards in the European region. Therefore, reliable predictions of such storm events are needed. Case studies have shown that ensemble prediction systems (EPS) are able to provide useful information about windstorms between two and five days prior to the event. In this work, ensemble predictions with the European Centre for Medium-Range Weather Forecasts (ECMWF) EPS are evaluated in a four year period. Within the 50 ensemble members, which are initialized every 12 hours and are run for 10 days, windstorms are identified and tracked in time and space. By using a clustering approach, different predictions of the same storm are identified in the different ensemble members and compared to reanalysis data. The occurrence probability of the predicted storms is estimated by fitting a bivariate normal distribution to the storm track positions. Our results show, for example, that predicted storm clusters with occurrence probabilities of more than 50% have a matching observed storm in 80% of all cases at a lead time of two days. The predicted occurrence probabilities are reliable up to 3 days lead time. At longer lead times the occurrence probabilities are overestimated by the EPS.
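The occurrence-probability idea can be sketched by fitting a bivariate normal to the storm-track positions of a cluster's members and integrating it over a verification box; the box bounds are hypothetical, and the paper's exact definition of occurrence probability may differ.

```python
import numpy as np
from scipy.stats import multivariate_normal

# track_xy: (n_members, 2) array of storm positions (e.g., lon, lat).
def cluster_occurrence_probability(track_xy, lon_box, lat_box):
    mu = track_xy.mean(axis=0)
    cov = np.cov(track_xy.T)
    mvn = multivariate_normal(mu, cov)
    lo = np.array([lon_box[0], lat_box[0]])
    hi = np.array([lon_box[1], lat_box[1]])
    # rectangle probability by inclusion-exclusion of the bivariate CDF
    return (mvn.cdf(hi) - mvn.cdf([hi[0], lo[1]])
            - mvn.cdf([lo[0], hi[1]]) + mvn.cdf(lo))
```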
NASA Astrophysics Data System (ADS)
Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.
2017-10-01
The probability distribution of the turbulent kinetic energy dissipation rate in the stratified ocean usually deviates from the classic lognormal distribution that has been formulated for and often observed in unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, the northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε̃_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, a lognormal distribution of ε̃_r is preferable, although the Burr is an acceptable alternative. The skewness Sk_ε and the kurtosis K_ε of the dissipation rate appear to be well correlated over a wide range of Sk_ε and K_ε variability.
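The distributional comparison can be sketched with scipy's built-in Burr XII and lognormal families: fit each to a sample of dissipation-rate values and compare log-likelihoods. Fixing the location parameter at zero is an assumption for simplicity.

```python
import numpy as np
from scipy import stats

# eps: 1-D array of positive, scale-averaged dissipation-rate values.
def compare_burr_lognormal(eps):
    burr_params = stats.burr12.fit(eps, floc=0)
    logn_params = stats.lognorm.fit(eps, floc=0)
    ll_burr = stats.burr12.logpdf(eps, *burr_params).sum()
    ll_logn = stats.lognorm.logpdf(eps, *logn_params).sum()
    return ll_burr, ll_logn   # higher log-likelihood indicates the better fit
```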
Airframe integrity based on Bayesian approach
NASA Astrophysics Data System (ADS)
Hurtado Cahuao, Jose Luis
Aircraft aging has become an immense challenge in terms of ensuring the safety of the fleet while controlling life cycle costs. One of the major concerns in aircraft structures is the development of fatigue cracks in the fastener holes. A probabilistic-based method has been proposed to manage this problem. In this research, the Bayes' theorem is used to assess airframe integrity by updating generic data with airframe inspection data while such data are compiled. This research discusses the methodology developed for assessment of loss of airframe integrity due to fatigue cracking in the fastener holes of an aging platform. The methodology requires a probability density function (pdf) at the end of SAFE life. Subsequently, a crack growth regime begins. As the Bayesian analysis requires information of a prior initial crack size pdf, such a pdf is assumed and verified to be lognormally distributed. The prior distribution of crack size as cracks grow is modeled through a combined Inverse Power Law (IPL) model and lognormal relationships. The first set of inspections is used as the evidence for updating the crack size distribution at the various stages of aircraft life. Moreover, the materials used in the structural part of the aircrafts have variations in their properties due to their calibration errors and machine alignment. A Matlab routine (PCGROW) is developed to calculate the crack distribution growth through three different crack growth models. As the first step, the material properties and the initial crack size are sampled. A standard Monte Carlo simulation is employed for this sampling process. At the corresponding aircraft age, the crack observed during the inspections, is used to update the crack size distribution and proceed in time. After the updating, it is possible to estimate the probability of structural failure as a function of flight hours for a given aircraft in the future. The results show very accurate and useful values related to the reliability and integrity of airframes in aging aircrafts. Inspection data shown in this dissertation are not the actual data from known aircrafts and are only used to demonstrate the methodologies.
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1998-01-01
The wind profile with all of its variations with respect to altitude has been, is now, and will continue to be important for aerospace vehicle design and operations. Wind profile databases and models are used for the vehicle ascent flight design for structural wind loading, flight control systems, performance analysis, and launch operations. This report presents the evolution of wind statistics and wind models from the empirical scalar wind profile model established for the Saturn Program through the development of the vector wind profile model used for the Space Shuttle design to the variations of this wind modeling concept for the X-33 program. Because wind is a vector quantity, the vector wind models use the rigorous mathematical probability properties of the multivariate normal probability distribution. When the vehicle ascent steering commands (ascent guidance) are wind biased to the wind profile measured on the day-of-launch, ascent structural wind loads are reduced and launch probability is increased. This wind load alleviation technique is recommended in the initial phase of vehicle development. The vehicle must fly through the largest load allowable versus altitude to achieve its mission. The Gumbel extreme value probability distribution is used to obtain the probability of exceeding (or not exceeding) the load allowable. The time conditional probability function is derived from the Gumbel bivariate extreme value distribution. This time conditional function is used for calculation of wind loads persistence increments using 3.5-hour Jimsphere wind pairs. These increments are used to protect the commit-to-launch decision. Other topics presented include the Shuttle load-response to smoothed wind profiles, a new gust model, and advancements in wind profile measuring systems. From the lessons learned and knowledge gained from past vehicle programs, the development of future launch vehicles can be accelerated. However, new vehicle programs by their very nature will require specialized support for new databases and analyses for wind, atmospheric parameters (pressure, temperature, and density versus altitude), and weather. It is for this reason that project managers are encouraged to collaborate with natural environment specialists early in the conceptual design phase. Such action will give the lead time necessary to meet the natural environment design and operational requirements, and thus, reduce development costs.
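The exceedance step mentioned above reduces to evaluating a Gumbel survival function; the location and scale values in the sketch are placeholders rather than program data.

```python
from scipy import stats

# Probability that the ascent load exceeds the load allowable q_allow when
# loads follow a Gumbel (extreme value type I) distribution.
def prob_exceed_allowable(q_allow, loc=1.0, scale=0.15):
    return stats.gumbel_r(loc=loc, scale=scale).sf(q_allow)

print(prob_exceed_allowable(1.4))
```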
1978-03-01
Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. AFIT/GAE thesis. The recovered fragment presents an equation for the risk of rupture of a unidirectionally laminated composite subjected to pure bending, which can be simplified further.
Performance bounds on parallel self-initiating discrete-event
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process described by the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution using the model developed from the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution using the model developed from the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
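The core idea of the Poisson-process argument can be sketched as follows: if initial cell counts per well are Poisson distributed and each cell survives inactivation independently with some probability, the survivor counts remain Poisson with a reduced mean. The sketch below illustrates this with a synthetic Weibull-type survival probability; the parameter values and the function name are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_survival(t, delta=5.0, beta=0.8):
    """Illustrative Weibull-type survival probability for a single cell at time t."""
    return np.exp(-(t / delta) ** beta)

lam, t, n_wells = 2.0, 10.0, 100000
initial = rng.poisson(lam, size=n_wells)                 # initial cells per well
survivors = rng.binomial(initial, weibull_survival(t))   # independent thinning of each well

# Thinning a Poisson count is again Poisson, with mean lam * p(t)
print("simulated mean survivors per well:", survivors.mean())
print("Poisson-process prediction:       ", lam * weibull_survival(t))
```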
Impact of Targeted Programs on Health Systems: A Case Study of the Polio Eradication Initiative
Loevinsohn, Benjamin; Aylward, Bruce; Steinglass, Robert; Ogden, Ellyn; Goodman, Tracey; Melgaard, Bjorn
2002-01-01
The results of 2 large field studies on the impact of the polio eradication initiative on health systems and 3 supplementary reports were presented at a December 1999 meeting convened by the World Health Organization. All of these studies concluded that positive synergies exist between polio eradication and health systems but that these synergies have not been vigorously exploited. The eradication of polio has probably improved health systems worldwide by broadening distribution of vitamin A supplements, improving cooperation among enterovirus laboratories, and facilitating linkages between health workers and their communities. The results of these studies also show that eliminating polio did not cause a diminution of funding for immunization against other illnesses. Relatively little is known about the opportunity costs of polio eradication. Improved planning in disease eradication initiatives can minimize disruptions in the delivery of other services. Future initiatives should include indicators and baseline data for monitoring effects on health systems development. PMID:11772750
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
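The central computation behind such a model is the long-time (steady-state) distribution of a row-stochastic transition matrix. The sketch below shows this step for a toy 4-site matrix via power iteration; the entries and the 4-site layout are illustrative stand-ins for the 50-site ensemble of matrices estimated from the autopsy data in the study.

```python
import numpy as np

def steady_state(P, tol=1e-12, max_iter=100000):
    """Long-time distribution pi of a row-stochastic matrix P, i.e. pi @ P = pi."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)            # start from a uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# Toy example: lung plus three metastatic sites; transition probabilities are made up.
P = np.array([[0.10, 0.50, 0.30, 0.10],
              [0.20, 0.40, 0.20, 0.20],
              [0.25, 0.25, 0.25, 0.25],
              [0.30, 0.30, 0.20, 0.20]])
print(steady_state(P))
```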
Lindauer, Andreas; Laveille, Christian; Stockis, Armel
2017-11-01
To quantify the relationship between exposure to lacosamide monotherapy and seizure probability, and to simulate the effect of changing the dose regimen. Structural time-to-event models for dropouts (not because of a lack of efficacy) and seizures were developed using data from 883 adult patients newly diagnosed with epilepsy and experiencing focal or generalized tonic-clonic seizures, participating in a trial (SP0993; ClinicalTrials.gov identifier: NCT01243177) comparing the efficacy of lacosamide and carbamazepine controlled-release monotherapy. Lacosamide dropout and seizure models were used for simulating the effect of changing the initial target dose on seizure freedom. Repeated time-to-seizure data were described by a Weibull distribution with parameters estimated separately for the first and subsequent seizures. Daily area under the plasma concentration-time curve was related linearly to the log-hazard. Disease severity, expressed as the number of seizures during the 3 months before the trial (baseline), was a strong predictor of seizure probability: patients with 7-50 seizures at baseline had a 2.6-fold (90% confidence interval 2.01-3.31) higher risk of seizures compared with the reference two to six seizures. Simulations suggested that a 400-mg/day, rather than a 200-mg/day initial target dose for patients with seven or more seizures at baseline could potentially result in an additional 8% of seizure-free patients for 6 months at the last evaluated dose level. Patients receiving lacosamide had a slightly lower dropout risk compared with those receiving carbamazepine. Baseline disease severity was the most important predictor of seizure probability. Simulations suggest that an initial target dose >200 mg/day could potentially benefit patients with greater disease severity.
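To make the structural model concrete, the sketch below simulates time to (first) seizure from a Weibull baseline hazard whose log-hazard is linear in daily exposure, using the inverse cumulative hazard. All parameter values (shape, scale, exposure coefficient) are illustrative assumptions, not the estimates reported for lacosamide.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_seizure_time(auc, shape=0.9, scale=200.0, beta=-0.004, n=1):
    """Time-to-seizure draws under a Weibull hazard with the log-hazard linear in
    daily AUC (proportional hazards). Parameter values are illustrative only."""
    e = rng.exponential(size=n)              # unit-rate exponential variates
    c = np.exp(beta * auc)                   # exposure effect on the hazard
    return scale * (e / c) ** (1.0 / shape)  # invert the cumulative hazard

# With a protective (negative) coefficient, higher exposure lengthens seizure-free time
print(sample_seizure_time(auc=50.0, n=5))
print(sample_seizure_time(auc=150.0, n=5))
```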
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
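A minimal sketch of the classification idea follows: a logistic regression is trained to predict whether the 3D tracking error exceeds a 2.5 mm threshold from a few per-time-point features. The features, labels, and coefficients here are synthetic placeholders generated to illustrate the workflow, not the patient data or the fitted model from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic stand-ins for the per-time-point features described in the abstract:
# tumor speed, previous 2D tracking error, and cosine of the trajectory/beam angle.
n = 5000
speed = rng.uniform(0, 20, n)        # mm/s
prev_err = rng.uniform(0, 3, n)      # mm
cos_angle = rng.uniform(-1, 1, n)
X = np.column_stack([speed, prev_err, cos_angle])

# Synthetic "true" 3D error and the 2.5 mm exceedance label
err3d = 0.1 * speed + 0.8 * prev_err + 0.5 * np.abs(cos_angle) + rng.normal(0, 0.4, n)
y = (err3d > 2.5).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("estimated exceedance probability:",
      clf.predict_proba([[15.0, 2.0, 0.3]])[0, 1])
```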
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
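One common way to represent the uncertainty in the population mean and covariance given limited data is sketched below, drawing covariance realizations from an inverse-Wishart distribution and mean realizations from a multivariate t distribution (requires SciPy ≥ 1.6). The abstract describes using the multivariate t and Wishart distributions, so this normal-inverse-Wishart form is an assumption that may differ in detail from the paper's exact construction; the data are synthetic placeholders.

```python
import numpy as np
from scipy.stats import invwishart, multivariate_t

rng = np.random.default_rng(3)

# Limited experimental data for two correlated input variables (synthetic placeholder)
data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 2.0]], size=30)
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

def sample_population_moments(n_draws=1000):
    """Draw plausible population means and covariances consistent with the sample."""
    sigmas = invwishart.rvs(df=n - 1, scale=(n - 1) * S, size=n_draws, random_state=rng)
    means = multivariate_t.rvs(loc=xbar, shape=S / n, df=n - 1, size=n_draws,
                               random_state=rng)
    return means, sigmas

means, sigmas = sample_population_moments()
print("spread of plausible population means:\n", means.std(axis=0))
```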
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption in the next year at 1.4×10⁻⁵.
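Under an exponential (memoryless) model for repose intervals, the annual eruption probability follows directly from the mean recurrence interval, as the short sketch below shows. The recurrence interval used here is an illustrative round number chosen to reproduce the order of magnitude quoted for the exponential model, not a value taken from the chronologies themselves.

```python
import math

def annual_eruption_probability(mean_recurrence_years):
    """P(at least one event in the next year) for an exponential interval model."""
    return 1.0 - math.exp(-1.0 / mean_recurrence_years)

# An illustrative mean recurrence interval of ~7,000 years gives an annual
# probability of roughly 1.4e-4, the order of magnitude quoted above.
print(annual_eruption_probability(7000.0))
```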
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super prompt critical wait time probability distribution was measured on CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with a very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)
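The final step of such a generating-function approach, recovering a count-number probability distribution by evaluating the generating function on the unit circle and applying an inverse discrete Fourier transform, can be sketched in a few lines. Here a Poisson generating function stands in for the solution of the adjoint master equations, purely to illustrate the inversion step; it is not the reactor model itself.

```python
import numpy as np

def pgf_to_pmf(G, M=256):
    """Recover P(N = n), n = 0..M-1, from a probability generating function G(z)
    by evaluating it at the M-th roots of unity and applying a DFT."""
    k = np.arange(M)
    z = np.exp(2j * np.pi * k / M)        # points on the unit circle
    g = G(z)                              # G evaluated on the unit circle
    return np.fft.fft(g).real / M         # p_n = (1/M) * sum_k g_k e^{-2pi i k n / M}

# Stand-in generating function: Poisson(4), G(z) = exp(lambda * (z - 1))
lam = 4.0
pmf = pgf_to_pmf(lambda z: np.exp(lam * (z - 1.0)))
print(pmf[:6])    # matches exp(-4) * 4**n / n!
```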
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
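A direct simulation of the process is a useful companion to the exact results: the sketch below propagates independent run-and-tumble particles whose velocity direction reverses at rate gamma, optionally with translational diffusion, and checks the long-time Gaussian spreading. Parameter values and the time step are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_rtp(n_particles=20000, v=1.0, gamma=1.0, D=0.0, t_max=10.0, dt=1e-2):
    """1D run-and-tumble particles: velocity +/-v reverses at rate gamma,
    with optional translational diffusion D. Returns final positions."""
    x = np.zeros(n_particles)
    sigma = rng.choice([-1.0, 1.0], size=n_particles)     # initial directions
    for _ in range(int(t_max / dt)):
        x += v * sigma * dt
        if D > 0:
            x += np.sqrt(2 * D * dt) * rng.normal(size=n_particles)
        flip = rng.random(n_particles) < gamma * dt       # tumble (reversal) events
        sigma[flip] *= -1.0
    return x

x = simulate_rtp(t_max=10.0)
# At long times the distribution approaches a Gaussian with variance ~ v^2 t / gamma
print(x.var(), 1.0 ** 2 * 10.0 / 1.0)
```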
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
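For the Onemax case the exact offspring fitness distribution can be computed directly, since the fitness change is the difference of two binomials (ones flipped off versus zeros flipped on). The sketch below illustrates that elementary calculation; it is a numerical cross-check of the setting rather than the Krawtchouk-polynomial machinery developed in the paper.

```python
import numpy as np
from scipy.stats import binom

def onemax_offspring_pmf(n, k, p):
    """Exact distribution of the Onemax fitness after uniform bit-flip mutation
    with per-bit probability p, for a parent string with k ones out of n bits."""
    lost = binom.pmf(np.arange(k + 1), k, p)              # ones flipped to zeros
    gained = binom.pmf(np.arange(n - k + 1), n - k, p)    # zeros flipped to ones
    pmf = np.zeros(n + 1)
    for i, pl in enumerate(lost):
        for j, pg in enumerate(gained):
            pmf[k - i + j] += pl * pg
    return pmf                                            # index = offspring fitness

pmf = onemax_offspring_pmf(n=20, k=12, p=0.05)
print(pmf.argmax(), pmf.sum())                            # mode near 12, mass sums to 1
```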
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
ERIC Educational Resources Information Center
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10⁻⁶, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used was chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
Comparison of forward and backward pp pair knockout in 3He(e,e'pp)n
NASA Astrophysics Data System (ADS)
Baghdasaryan, H.; Weinstein, L. B.; Laget, J. M.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anghinolfi, M.; Ball, J.; Battaglieri, M.; Biselli, A. S.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Cole, P. L.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Daniel, A.; Dashyan, N.; De Sanctis, E.; De Vita, R.; Djalali, C.; Dodge, G. E.; Doughty, D.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Fedotov, G.; Gabrielyan, M. Y.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Gohn, W.; Gothe, R. W.; Griffioen, K. A.; Guegan, B.; Guidal, M.; Hafidi, K.; Hicks, K.; Hyde, C. E.; Ireland, D. G.; Ishkhanov, B. S.; Jenkins, D.; Jo, H. S.; Joo, K.; Khandaker, M.; Khetarpal, P.; Kim, A.; Kim, W.; Kubarovsky, A.; Kubarovsky, V.; Kuhn, S. E.; Kuleshov, S. V.; Kvaltine, N. D.; Lu, H. Y.; MacGregor, I. J. D.; McKinnon, B.; Mirazita, M.; Mokeev, V.; Moutarde, H.; Munevar, E.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Paolone, M.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Pisano, S.; Pozdniakov, S.; Procureur, S.; Raue, B. A.; Ricco, G.; Rimal, D.; Ripani, M.; Rosner, G.; Rossi, P.; Saini, M. S.; Saylor, N. A.; Schott, D.; Schumacher, R. A.; Seraydaryan, H.; Smith, E. S.; Sober, D. I.; Sokan, D.; Stepanyan, S. S.; Stepanyan, S.; Strauch, S.; Taiuti, M.; Tang, W.; Tkachenko, S.; Voskanyan, H.; Voutier, E.; Wood, M. H.; Zana, L.; Zhao, B.
2012-06-01
Measuring nucleon-nucleon short range correlations (SRCs) has been a goal of the nuclear physics community for many years. They are an important part of the nuclear wave function, accounting for almost all of the high-momentum strength. They are closely related to the EMC effect. While their overall probability has been measured, measuring their momentum distributions is more difficult. In order to determine the best configuration for studying SRC momentum distributions, we measured the 3He(e,e'pp)n reaction, looking at events with high-momentum protons (pp>0.35 GeV/c) and a low-momentum neutron (pn<0.2 GeV/c). We examined two angular configurations: either both protons emitted forward or one proton emitted forward and one backward (with respect to the momentum transfer, q⃗). The measured relative momentum distribution of the events with one forward and one backward proton was much closer to the calculated initial-state pp relative momentum distribution, indicating that this is the preferred configuration for measuring SRC.
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
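The analysis pipeline described here, extracting the amplitude envelope with the Hilbert transform and fitting candidate probability distributions, can be sketched as below. The input signal is a synthetic stand-in for a band-limited EEG trace; only the envelope-extraction and distribution-fitting steps of the study are illustrated.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import lognorm, gamma

rng = np.random.default_rng(5)

# Synthetic stand-in for a band-limited EEG trace (the study used real preterm EEG)
fs = 250.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) * rng.lognormal(0.0, 0.5, t.size) \
         + 0.2 * rng.normal(size=t.size)

envelope = np.abs(hilbert(signal))        # Hilbert-transform amplitude envelope

# Fit the candidate probability distributions to the envelope samples
ln_shape, ln_loc, ln_scale = lognorm.fit(envelope, floc=0.0)
g_shape, g_loc, g_scale = gamma.fit(envelope, floc=0.0)
print("lognormal shape/scale:", ln_shape, ln_scale)
print("gamma shape/scale:    ", g_shape, g_scale)
```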
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
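A bare-bones version of the method-of-moments Log-Pearson Type III fit is sketched below: moments of the log-transformed annual peaks parameterize a Pearson III distribution, from which return-period quantiles follow. The peak-flow record is synthetic, and operational practice (regional skew weighting, low-outlier screening, and related adjustments) is deliberately omitted here.

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(6)
# Annual maximum flows in m^3/s (synthetic placeholder for a gaged record)
peaks = rng.lognormal(mean=5.0, sigma=0.6, size=60)

# Method-of-moments LP3 fit: sample moments of the log10 flows
logq = np.log10(peaks)
m, s, g = logq.mean(), logq.std(ddof=1), skew(logq, bias=False)

def lp3_quantile(return_period):
    """Flood magnitude with the given return period under the fitted LP3 model."""
    p = 1.0 - 1.0 / return_period                   # annual non-exceedance probability
    return 10.0 ** pearson3.ppf(p, g, loc=m, scale=s)

print("100-year flood estimate:", lp3_quantile(100.0))
```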
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2014-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
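The height-and-weight question quoted above reduces to a conditional probability under a bivariate normal model, as the sketch below shows. The means, standard deviations, and correlation used here are illustrative guesses, not the values from the dataset used by the web application.

```python
import math
from scipy.stats import norm

# Illustrative adolescent parameters (assumed, not the real dataset values)
mu_h, sd_h = 65.0, 3.5      # height, inches
mu_w, sd_w = 125.0, 15.0    # weight, pounds
rho = 0.5                   # height-weight correlation

def weight_interval_prob_given_height(h, lo, hi):
    """P(lo <= weight <= hi | height = h) under a bivariate normal model."""
    cond_mean = mu_w + rho * sd_w / sd_h * (h - mu_h)
    cond_sd = sd_w * math.sqrt(1.0 - rho ** 2)
    return norm.cdf(hi, cond_mean, cond_sd) - norm.cdf(lo, cond_mean, cond_sd)

# Probability an adolescent weighs between 120 and 140 lb given average height
print(weight_interval_prob_given_height(mu_h, 120.0, 140.0))
```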
Bayesian ionospheric multi-instrument 3D tomography
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Vierinen, Juha; Roininen, Lassi
2017-04-01
The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. Especially the vertical electron density profile is determined predominantly by the regularisation. Often utilised regularisations in ionospheric tomography include smoothness constraints and iterative methods with initial ionospheric models. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach for the problem, as the regularisation can be given in a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics, other available ionospheric measurements and their statistics. Updating the prior with measurements results in the posterior distribution that carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved together with the corresponding probability intervals. Altogether, the Bayesian methodology provides understanding of how strong the given regularisation is, what information is gained with the measurements and how reliable the final result is. In addition, the combination of different measurements and temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires inversion of large covariance matrices, resulting in computational infeasibility. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations of the covariances. The approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for Low Earth orbit beacon satellite signals, GNSS receiver networks, as well as from EISCAT ionosondes and incoherent scatter radars. The performance is demonstrated in the three-dimensional spatial domain with temporal development also taken into account.
Stylized facts in internal rates of return on stock index and its derivative transactions
NASA Astrophysics Data System (ADS)
Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya
2007-08-01
Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. Resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions, in case a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions may be useful in deducing the type of investment strategy from trading revenues of small portfolio investors.
Probabilistic Reasoning for Robustness in Automated Planning
NASA Technical Reports Server (NTRS)
Schaffer, Steven; Clement, Bradley; Chien, Steve
2007-01-01
A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified averseness to risk and other measures of optimality.
Mathematical Model to estimate the wind power using four-parameter Burr distribution
NASA Astrophysics Data System (ADS)
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
When the actual probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is more suitable than other distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and the expression for the probability distribution of the output power of a wind turbine is derived.
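A minimal numerical sketch of the idea follows: a four-parameter Burr (Burr XII) wind-speed model, two shape parameters plus location and scale, is integrated against a simplified turbine power curve to obtain the expected output power. The distribution parameters, the power-curve shape, and all cut-in/rated/cut-out values are assumptions for illustration, not fitted or manufacturer values.

```python
import numpy as np
from scipy.stats import burr12
from scipy.integrate import quad

# Illustrative four-parameter Burr XII wind-speed model (assumed parameters)
c, d, loc, scale = 2.5, 1.2, 0.0, 8.0
wind = burr12(c, d, loc=loc, scale=scale)

def turbine_power(v, cut_in=3.0, rated=12.0, cut_out=25.0, p_rated=2000.0):
    """Simplified turbine power curve in kW (cubic ramp between cut-in and rated)."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated:
        return p_rated
    return p_rated * (v ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)

# Expected output power: integrate the power curve against the wind-speed density
expected_power, _ = quad(lambda v: turbine_power(v) * wind.pdf(v),
                         0.0, 40.0, points=(3.0, 12.0, 25.0))
print("expected turbine output (kW):", expected_power)
```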
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED in that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
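The "flat over each fractile interval" behaviour can be made concrete with a short construction: given lower and upper bounds plus elicited fractiles, the maximum entropy density is piecewise constant with height equal to the probability increment divided by the interval width. The fractile values below are illustrative, and this sketch is a generic construction consistent with that property rather than the paper's graphical method itself.

```python
import numpy as np

def fmed_density(fractiles, cum_probs):
    """Maximum-entropy density subject to fractile constraints (FMED):
    piecewise constant, with height dp/dx over each fractile interval."""
    x = np.asarray(fractiles, dtype=float)
    p = np.asarray(cum_probs, dtype=float)
    heights = np.diff(p) / np.diff(x)
    def pdf(v):
        v = np.asarray(v, dtype=float)
        idx = np.clip(np.searchsorted(x, v, side="right") - 1, 0, len(heights) - 1)
        inside = (v >= x[0]) & (v <= x[-1])
        return np.where(inside, heights[idx], 0.0)
    return pdf

# Bounds plus elicited 5th/25th/50th/75th fractiles (illustrative values)
pdf = fmed_density([0.0, 2.0, 5.0, 8.0, 12.0, 20.0],
                   [0.0, 0.05, 0.25, 0.50, 0.75, 1.0])
print(pdf([1.0, 6.0, 15.0]))   # constant within each fractile interval
```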
Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations
NASA Astrophysics Data System (ADS)
Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.
2018-04-01
Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts to be used in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data was abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficient in detail and this impacts on the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x₁ = f₁(U₁, ..., Uₙ), ..., xₙ = fₙ(U₁, ..., Uₙ) such that if U₁, ..., Uₙ are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x₁, ..., xₙ coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
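A small worked instance of this recursive (conditional inverse-CDF) construction is sketched below for a two-dimensional target: the first variable is drawn by inverting its marginal CDF, the second by inverting its conditional CDF given the first. The specific joint distribution chosen here is an illustrative example, not one taken from the report.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_joint(n_samples):
    """Recursive construction x1 = f1(U1), x2 = f2(U1, U2) via inverse conditional CDFs.
    Illustrative target: X1 ~ Exp(1) and, given X1 = x1, X2 ~ Exp(rate = 1 + x1)."""
    u1 = rng.random(n_samples)
    u2 = rng.random(n_samples)
    x1 = -np.log(1.0 - u1)                  # inverse CDF of Exp(1)
    x2 = -np.log(1.0 - u2) / (1.0 + x1)     # inverse conditional CDF given x1
    return x1, x2

x1, x2 = simulate_joint(200000)
print(x1.mean())                   # ~1, the Exp(1) mean
print(np.corrcoef(x1, x2)[0, 1])   # negative: larger x1 implies smaller x2 on average
```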
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source term probability distributions provided by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor determined, and the functional relationship among all the factors established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
Trapping dynamics of xenon on Pt(111)
NASA Astrophysics Data System (ADS)
Arumainayagam, Christopher R.; Madix, Robert J.; Mcmaster, Mark C.; Suzawa, Valerie M.; Tully, John C.
1990-02-01
The dynamics of Xe trapping on Pt(111) was studied using supersonic atomic beam techniques. Initial trapping probabilities (S0) were measured directly as a function of incident translational energy (ET) and angle of incidence (θi) at a surface temperature (Ts) of 95 K. The initial trapping probability decreases smoothly with increasing ET cos θi, rather than ET cos²θi, suggesting participation of parallel momentum in the trapping process. Accordingly, the measured initial trapping probability falls off more slowly with increasing incident translational energy than predicted by one-dimensional theories. This finding is in near agreement with previous mean translational energy measurements for Xe desorbing near the Pt(111) surface normal, assuming detailed balance applies. Three-dimensional stochastic classical trajectory calculations presented herein also exhibit the importance of tangential momentum in trapping and satisfactorily reproduce the experimental initial trapping probabilities.
Measurement of the electron shake-off in the β-decay of laser-trapped 6He atoms
NASA Astrophysics Data System (ADS)
Hong, Ran; Bagdasarova, Yelena; Garcia, Alejandro; Storm, Derek; Sternberg, Matthew; Swanson, Erik; Wauters, Frederik; Zumwalt, David; Bailey, Kevin; Leredde, Arnaud; Mueller, Peter; O'Connor, Thomas; Flechard, Xavier; Liennard, Etienne; Knecht, Andreas; Naviliat-Cuncic, Oscar
2016-03-01
Electron shake-off is an important process in many high precision nuclear β-decay measurements searching for physics beyond the standard model. 6He, being one of the lightest β-decaying isotopes, has a simple atomic structure. Thus, it is well suited for testing calculations of shake-off effects. Shake-off probabilities from the 2³S₁ and 2³P₂ initial states of laser-trapped 6He matter for the on-going beta-neutrino correlation study at the University of Washington. These probabilities are obtained by analyzing the time-of-flight distribution of the recoil ions detected in coincidence with the beta particles. A β-neutrino correlation independent analysis approach was developed. The measured upper limit of the double shake-off probability is 2×10⁻⁴ at 90% confidence level. This result is ~100 times lower than the most recent calculation by Schulhoff and Drake. This work is supported by DOE, Office of Nuclear Physics, under Contract Nos. DE-AC02-06CH11357 and DE-FG02-97ER41020.
Li, Pan; Zhou, Yu-Ying; Lu, Da; Wang, Yan; Zhang, Hui-Hong
2016-05-01
Although the neuropathologic changes and diagnostic criteria for the neurodegenerative disorder Alzheimer's disease (AD) are well-established, the clinical symptoms vary largely. Symptomatically, frontal variant of AD (fv-AD) presents very similarly to behavioral variant frontotemporal dementia (bvFTD), which creates major challenges for differential diagnosis. Here, we report two patients who present with progressive cognitive impairment, early and prominent behavioral features, and significant frontotemporal lobe atrophy on magnetic resonance imaging, consistent with an initial diagnosis of probable bvFTD. However, multimodal functional neuroimaging revealed neuropathological data consistent with a diagnosis of probable AD for one patient (pathology distributed in the frontal lobes) and a diagnosis of probable bvFTD for the other patient (hypometabolism in the bilateral frontal lobes). In addition, the fv-AD patient presented with greater executive impairment and milder behavioral symptoms relative to the bvFTD patient. These cases highlight that recognition of these atypical syndromes using detailed neuropsychological tests, biomarkers, and multimodal neuroimaging will lead to greater accuracy in diagnosis and patient management.
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer considering the attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding protocol layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
Heterogeneous network epidemics: real-time growth, variance and extinction of infection.
Ball, Frank; House, Thomas
2017-09-01
Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that are numerically fast compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
Devi, Suma Priya Sudarsana; Howe, James R.
2016-01-01
Key points Purkinje cells of the cerebellum receive ∼180,000 parallel fibre synapses, which have often been viewed as a homogeneous synaptic population and studied using single action potentials.Many parallel fibre synapses might be silent, however, and granule cells in vivo fire in bursts. Here, we used trains of stimuli to study parallel fibre inputs to Purkinje cells in rat cerebellar slices.Analysis of train EPSCs revealed two synaptic components, phase 1 and 2. Phase 1 is initially large and saturates rapidly, whereas phase 2 is initially small and facilitates throughout the train. The two components have a heterogeneous distribution at dendritic sites and different pharmacological profiles.The differential sensitivity of phase 1 and phase 2 to inhibition by pentobarbital and NBQX mirrors the differential sensitivity of AMPA receptors associated with the transmembrane AMPA receptor regulatory protein, γ‐2, gating in the low‐ and high‐open probability modes, respectively. Abstract Cerebellar granule cells fire in bursts, and their parallel fibre axons (PFs) form ∼180,000 excitatory synapses onto the dendritic tree of a Purkinje cell. As many as 85% of these synapses have been proposed to be silent, but most are labelled for AMPA receptors. Here, we studied PF to Purkinje cell synapses using trains of 100 Hz stimulation in rat cerebellar slices. The PF train EPSC consisted of two components that were present in variable proportions at different dendritic sites: one, with large initial EPSC amplitude, saturated after three stimuli and dominated the early phase of the train EPSC; and the other, with small initial amplitude, increased steadily throughout the train of 10 stimuli and dominated the late phase of the train EPSC. The two phases also displayed different pharmacological profiles. Phase 2 was less sensitive to inhibition by NBQX but more sensitive to block by pentobarbital than phase 1. Comparison of synaptic results with fast glutamate applications to recombinant receptors suggests that the high‐open‐probability gating mode of AMPA receptors containing the auxiliary subunit transmembrane AMPA receptor regulatory protein γ‐2 makes a substantial contribution to phase 2. We argue that the two synaptic components arise from AMPA receptors with different functional signatures and synaptic distributions. Comparisons of voltage‐ and current‐clamp responses obtained from the same Purkinje cells indicate that phase 1 of the EPSC arises from synapses ideally suited to transmit short bursts of action potentials, whereas phase 2 is likely to arise from low‐release‐probability or ‘silent’ synapses that are recruited during longer bursts. PMID:27094216
NASA Astrophysics Data System (ADS)
Atencia, A.; Llasat, M. C.; Garrote, L.; Mediero, L.
2010-10-01
The performance of distributed hydrological models depends on the resolution, both spatial and temporal, of the rainfall input data. The estimation of quantitative precipitation from meteorological radar or satellite can improve hydrological model results, thanks to an indirect estimation at higher spatial and temporal resolution. In this work, composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km2 spatial resolution, provided by the Catalan Meteorological Service, are used to feed the RIBS distributed hydrological model. A Window Probability Matching Method (gage-adjustment method) is applied to four cases of heavy rainfall to correct the underestimation of observed rainfall in both the convective and stratiform Z/R relations used over Catalonia. Once the rainfall field has been adequately obtained, an advection correction, based on cross-correlation between two consecutive images, is introduced to obtain several time resolutions, from 1 min to 30 min. Each resolution is treated as an independent event, resulting in a probable range of input rainfall data. This ensemble of rainfall data is used, together with other sources of uncertainty, such as the initial basin state or the accuracy of discharge measurements, to calibrate the RIBS model using a probabilistic methodology. A sensitivity analysis of time resolutions was implemented by comparing the various results with real values from stream-flow measurement stations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Bronkhorst, Curt Allan
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
Sudden transition and sudden change from open spin environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Zheng-Da; School of Science, Jiangnan University, Wuxi 214122; Xu, Jing-Bo, E-mail: xujb@zju.edu.cn
2014-11-15
We investigate the necessary conditions for the existence of sudden transition or sudden change phenomenon for appropriate initial states under dephasing. As illustrative examples, we study the behaviors of quantum correlation dynamics of two noninteracting qubits in independent and common open spin environments, respectively. For the independent environments case, we find that the quantum correlation dynamics is closely related to the Loschmidt echo and the dynamics exhibits a sudden transition from classical to quantum correlation decay. It is also shown that the sudden change phenomenon may occur for the common environment case and stationary quantum discord is found at the high temperature region of the environment. Finally, we investigate the quantum criticality of the open spin environment by exploring the probability distribution of the Loschmidt echo and the scaling transformation behavior of quantum discord, respectively. - Highlights: • Sudden transition or sudden change from open spin baths are studied. • Quantum discord is related to the Loschmidt echo in independent open spin baths. • Steady quantum discord is found in a common open spin bath. • The probability distribution of the Loschmidt echo is analyzed. • The scaling transformation behavior of quantum discord is displayed.
A numerical study of multiple adiabatic shear bands evolution in a 304LSS thick-walled cylinder
NASA Astrophysics Data System (ADS)
Liu, Mingtao; Hu, Haibo; Fan, Cheng; Tang, Tiegang
2017-01-01
The self-organization of multiple shear bands in a 304L stainless steel (304LSS) thick-walled cylinder (TWC) was numerically studied. The microstructure of the material leads to a non-uniform distribution of the local yield stress, which plays a key role in the formation of spontaneous shear localization. We introduced a probability factor satisfying a Gaussian distribution into the macroscopic constitutive relationship to describe the non-uniformity of the local yield stress. Using the probability factor, the initiation and propagation of multiple shear bands in the TWC were numerically replicated in our 2D FEM simulation. Experimental results in the literature indicated that the machined surface at the internal boundary of a 304L stainless steel cylinder provides a work-hardened layer (about 20-30 μm) which has significantly different microstructures from the base material. The work-hardened layer leads to the phenomenon that most shear bands propagate along a given direction, clockwise or counterclockwise. In our simulation, periodic single-direction spiral perturbations were applied to describe the grain orientation in the work-hardened layer, and the single-direction spiral pattern of shear bands was successfully replicated.
Network analysis of the hominin origin of Herpes Simplex virus 2 from fossil data
Underdown, Simon J.; Kumar, Krishna
2017-01-01
Abstract: Herpes simplex virus 2 (HSV2) is a human herpesvirus found worldwide that causes genital lesions and more rarely causes encephalitis. This pathogen is most common in Africa, and particularly in central and east Africa, an area of particular significance for the evolution of modern humans. Unlike HSV1, HSV2 has not simply co-speciated with humans from their last common ancestor with primates. HSV2 jumped the species barrier between 1.4 and 3 MYA, most likely through intermediate but unknown hominin species. In this article, we use probability-based network analysis to determine the most probable transmission path between intermediate hosts of HSV2, from the ancestors of chimpanzees to the ancestors of modern humans, using paleo-environmental data on the distribution of African tropical rainforest over the last 3 million years and data on the age and distribution of fossil species of hominin present in Africa between 1.4 and 3 MYA. Our model identifies Paranthropus boisei as the most likely intermediate host of HSV2, while Homo habilis may also have played a role in the initial transmission of HSV2 from the ancestors of chimpanzees to P. boisei. PMID:28979799
Network analysis of the hominin origin of Herpes Simplex virus 2 from fossil data.
Underdown, Simon J; Kumar, Krishna; Houldcroft, Charlotte
2017-07-01
Herpes simplex virus 2 (HSV2) is a human herpesvirus found worldwide that causes genital lesions and more rarely causes encephalitis. This pathogen is most common in Africa, and particularly in central and east Africa, an area of particular significance for the evolution of modern humans. Unlike HSV1, HSV2 has not simply co-speciated with humans from their last common ancestor with primates. HSV2 jumped the species barrier between 1.4 and 3 MYA, most likely through intermediate but unknown hominin species. In this article, we use probability-based network analysis to determine the most probable transmission path between intermediate hosts of HSV2, from the ancestors of chimpanzees to the ancestors of modern humans, using paleo-environmental data on the distribution of African tropical rainforest over the last 3 million years and data on the age and distribution of fossil species of hominin present in Africa between 1.4 and 3 MYA. Our model identifies Paranthropus boisei as the most likely intermediate host of HSV2, while Homo habilis may also have played a role in the initial transmission of HSV2 from the ancestors of chimpanzees to P. boisei.
Chakrabarty, Ayan; Wang, Feng; Sun, Kai; Wei, Qi-Huo
2016-05-11
Prior studies have shown that low symmetry particles such as micro-boomerangs exhibit Brownian motion rather different from that of high symmetry particles because convenient tracking points (TPs) are usually inconsistent with their center of hydrodynamic stress (CoH) where the translational and rotational motions are decoupled. In this paper we study the effects of translation-rotation coupling on the displacement probability distribution functions (PDFs) of boomerang colloidal particles with symmetric arm lengths. By tracking the motions of different points on the particle symmetry axis, we show that as the distance between the TP and the CoH is increased, the effects of translation-rotation coupling become pronounced, making the short-time 2D PDF for a fixed initial orientation change from an elliptical, to a bean-like, and then to a crescent shape, and the angle-averaged PDFs change from an ellipsoidal-particle-like PDF to a shape with a Gaussian top and long displacement tails. We also observed that at long times the PDFs revert to Gaussian. These 2D PDF shapes provide a clear physical picture of the non-zero mean displacements observed in boomerang particles.
Training models of anatomic shape variability
Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang
2008-01-01
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
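As a rough illustration of this kind of comparison (a sketch assuming Python with scipy; the "station" data are synthetic stand-ins and the held-out log-likelihood is only a simple proxy for the cross-validation procedure used in the study):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic stand-in for a station's monthly precipitation totals (mm); real data would replace this.
    data = stats.gamma.rvs(a=2.0, scale=40.0, size=1200, random_state=rng)

    candidates = {
        "Pearson III": stats.pearson3,
        "GEV (extreme value)": stats.genextreme,
        "gamma": stats.gamma,
    }

    train, test = data[:800], data[800:]
    for name, dist in candidates.items():
        params = dist.fit(train)                    # maximum-likelihood fit on the training split
        ll = np.sum(dist.logpdf(test, *params))     # out-of-sample log-likelihood as a simple score
        print(f"{name:>22s}: held-out log-likelihood = {ll:.1f}")

    # Box-Cox transform-normal: transform, fit a normal, score on the original scale (Jacobian included).
    xt, lam = stats.boxcox(train)
    mu, sd = np.mean(xt), np.std(xt, ddof=1)
    ll_bc = np.sum(stats.norm.logpdf(stats.boxcox(test, lmbda=lam), mu, sd) + (lam - 1) * np.log(test))
    print(f"{'Box-Cox normal':>22s}: held-out log-likelihood = {ll_bc:.1f}")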
The contribution of occupation to health inequality
Ravesteijn, Bastian; van Kippersluis, Hans; van Doorslaer, Eddy
2014-01-01
Health is distributed unequally by occupation. Workers on a lower rung of the occupational ladder report worse health, have a higher probability of disability and die earlier than workers higher up the occupational hierarchy. Using a theoretical framework that unveils some of the potential mechanisms underlying these disparities, three core insights emerge: (i) there is selection into occupation on the basis of initial wealth, education, and health, (ii) there will be behavioural responses to adverse working conditions, which can have compensating or reinforcing effects on health, and (iii) workplace conditions increase health inequalities if workers with initially low socioeconomic status choose harmful occupations and don’t offset detrimental health effects. We provide empirical illustrations of these insights using data for the Netherlands and assess the evidence available in the economics literature. PMID:24899789
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
NASA Astrophysics Data System (ADS)
Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.
2017-12-01
A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing, respectively, the storm's slow, moderate, and rapid intensification (or deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) are dependent on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity. Here we compare MeHiM with various dynamical models, including a global climate model [the High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model [the Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model], and a simplified hurricane dynamic model [the Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM developed based on the reanalysis data is applied to estimate the intensity of simulated storms to compare with the dynamical-model predictions under the current climate. The dependences of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.
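A toy sketch of the model structure (assuming Python/numpy; the three-state transition matrix and the per-state intensity-change distributions are invented for illustration and, unlike MeHiM, do not depend on environmental covariates):

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical 3-state chain: 0 = slow, 1 = moderate, 2 = rapid intensification/de-intensification.
    P = np.array([[0.80, 0.15, 0.05],      # transition probabilities (illustrative, not fitted)
                  [0.20, 0.65, 0.15],
                  [0.10, 0.30, 0.60]])
    dI_params = [(0.0, 2.0), (3.0, 4.0), (8.0, 6.0)]   # (mean, std) of 6-h intensity change in kt, per state

    def simulate_storm(n_steps=40, v0=35.0):
        state = rng.choice(3, p=[0.5, 0.3, 0.2])       # a Bayesian initial-state estimate would go here
        v = [v0]
        for _ in range(n_steps):
            mu, sd = dI_params[state]
            v.append(max(v[-1] + rng.normal(mu, sd), 0.0))
            state = rng.choice(3, p=P[state])
        return np.array(v)

    lmi = [simulate_storm().max() for _ in range(5000)]     # lifetime maximum intensity sample
    print("median / 95th percentile LMI (kt):", np.percentile(lmi, [50, 95]))

In the full model, each row of the transition matrix and each intensity-change distribution would be conditioned on potential intensity, shear, humidity and ocean mixing.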
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
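As a concrete illustration (a sketch assuming Python/numpy; the Gumbel formulas are the standard textbook PWM estimators rather than anything specific to this paper), the probability weighted moments b_r = E[X F(X)^r] can be estimated from an ordered sample and converted directly into parameter estimates:

    import numpy as np
    from math import comb

    def pwm(x, r):
        """Unbiased probability weighted moment b_r = E[X F(X)^r] estimated from an ordered sample."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        w = np.array([comb(j, r) / comb(n - 1, r) for j in range(n)])   # weight for the (j+1)-th order statistic
        return np.mean(w * x)

    # Example: PWM estimates of Gumbel parameters (location xi, scale alpha) from a sample.
    rng = np.random.default_rng(0)
    sample = rng.gumbel(loc=10.0, scale=3.0, size=5000)
    b0, b1 = pwm(sample, 0), pwm(sample, 1)
    alpha = (2 * b1 - b0) / np.log(2)
    xi = b0 - 0.5772156649 * alpha          # Euler-Mascheroni constant
    print(f"estimated scale = {alpha:.2f}, location = {xi:.2f}")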
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
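A toy sketch of the maximum entropy assignment (assuming Python with numpy/scipy; the six-level system and the mean-energy constraint are invented for illustration): normalization plus a fixed average yields exponential, Boltzmann-like weights whose Lagrange multiplier is found numerically.

    import numpy as np
    from scipy.optimize import brentq

    E = np.arange(6, dtype=float)     # outcomes: energy levels of a toy six-level system (illustrative)
    E_mean = 1.5                      # constraint: required average energy

    def avg_energy(lam):
        w = np.exp(-lam * E)
        p = w / w.sum()
        return p @ E

    # Maximum entropy subject to normalization and a fixed mean gives p_i proportional to exp(-lam*E_i);
    # the Lagrange multiplier lam is fixed by the constraint <E> = E_mean.
    lam = brentq(lambda l: avg_energy(l) - E_mean, -10, 10)
    p = np.exp(-lam * E); p /= p.sum()
    print("lambda =", round(lam, 4), "maxent probabilities:", np.round(p, 4))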
NASA Astrophysics Data System (ADS)
Elber Duverger, James; Boudreau-Béland, Jonathan; Le, Minh Duc; Comtois, Philippe
2014-11-01
Self-organization of pacemaker (PM) activity of interconnected elements is important to the general theory of reaction-diffusion systems as well as for applications such as PM activity in cardiac tissue to initiate beating of the heart. Monolayer cultures of neonatal rat ventricular myocytes (NRVMs) are often used as experimental models in studies on cardiac electrophysiology. These monolayers exhibit automaticity (spontaneous activation) of their electrical activity. At low plated density, cells usually show a heterogeneous population consisting of PM and quiescent excitable cells (QECs). It is therefore highly probable that monolayers of NRVMs consist of a heterogeneous network of the two cell types. However, the effects of density and spatial distribution of the PM cells on spontaneous activity of monolayers remain unknown. Thus, a simple stochastic pattern formation algorithm was implemented to distribute PM and QECs in a binary-like 2D network. A FitzHugh-Nagumo excitable medium was used to simulate electrical spontaneous and propagating activity. Simulations showed a clear nonlinear dependency of spontaneous activity (occurrence and amplitude of spontaneous period) on the spatial patterns of PM cells. In most simulations, the first initiation sites were found to be located near the substrate boundaries. Comparison with experimental data obtained from cardiomyocyte monolayers shows important similarities in the position of initiation site activity. However, limitations in the model that do not reflect the complex beat-to-beat variation found in experiments indicate the need for a more realistic cardiomyocyte representation.
Driscoll, Ira; Shumaker, Sally A; Snively, Beverly M; Margolis, Karen L; Manson, JoAnn E; Vitolins, Mara Z; Rossom, Rebecca C; Espeland, Mark A
2016-12-01
Nonhuman studies suggest a protective effect of caffeine on cognition. Although human literature remains less consistent, reviews suggest a possible favorable relationship between caffeine consumption and cognitive impairment or dementia. We investigated the relationship between caffeine intake and incidence of cognitive impairment or probable dementia in women aged 65 and older from the Women's Health Initiative Memory Study. All women with self-reported caffeine consumption at enrollment were included (N = 6,467). In 10 years or less of follow-up with annual assessments of cognitive function, 388 of these women received a diagnosis of probable dementia based on a 4-phase protocol that included central adjudication. We used proportional hazards regression to assess differences in the distributions of times until incidence of probable dementia or composite cognitive impairment among women grouped by baseline level of caffeine intake, adjusting for risk factors (hormone therapy, age, race, education, body mass index, sleep quality, depression, hypertension, prior cardiovascular disease, diabetes, smoking, and alcohol consumption). Women consuming above median levels (mean intake = 261mg) of caffeine intake for this group were less likely to develop incident dementia (hazard ratio = 0.74, 95% confidence interval [0.56, 0.99], p = .04) or any cognitive impairment (hazard ratio = 0.74, confidence interval [0.60, 0.91], p = .005) compared to those consuming below median amounts (mean intake = 64mg) of caffeine for this group. Our findings suggest lower odds of probable dementia or cognitive impairment in older women whose caffeine consumption was above median for this group and are consistent with the existing literature showing an inverse association between caffeine intake and age-related cognitive impairment. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations of Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method had similar ability for the regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters that affect ET0 can affect the distribution of reference evapotranspiration.
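A sketch of the PPCC idea used here (assuming Python with scipy; the annual ET0 sample is synthetic and the Cunnane plotting positions are one common choice, not necessarily the authors'): the candidate distribution with the highest correlation on the probability plot is preferred.

    import numpy as np
    from scipy import stats

    def ppcc(sample, dist):
        """Probability plot correlation coefficient for a fitted scipy distribution."""
        x = np.sort(sample)
        n = len(x)
        i = np.arange(1, n + 1)
        p = (i - 0.4) / (n + 0.2)                 # Cunnane plotting positions
        params = dist.fit(sample)
        q = dist.ppf(p, *params)                  # theoretical quantiles of the fitted distribution
        return np.corrcoef(x, q)[0, 1]

    rng = np.random.default_rng(7)
    et0 = stats.pearson3.rvs(skew=0.8, loc=1200, scale=150, size=55, random_state=rng)  # synthetic annual ET0 (mm)

    for name, dist in [("Pearson III", stats.pearson3), ("GEV", stats.genextreme),
                       ("normal", stats.norm), ("log-normal", stats.lognorm)]:
        print(f"{name:>12s}: PPCC = {ppcc(et0, dist):.4f}")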
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are employed: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is then used to find a compromise solution, supported by an illustrative numerical example.
Work probability distribution and tossing a biased coin
NASA Astrophysics Data System (ADS)
Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar
2011-01-01
We show that the rare events present in dissipated work that enters the Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing the work probability distribution independently of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Rapidity window dependences of higher order cumulants and diffusion master equation
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo
2015-10-01
We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that various information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
Stochastic analysis of a pulse-type prey-predator model
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment) astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carney, D.; Kvitek, R.G.
1990-12-01
The report provides an evaluation of the impacts of the bunker C fuel oil spill on the shallow subtidal benthic communities of the Washington coast. The study is designed to provide a subtidal extension of the intertidal investigation performed by Battelle Laboratories. As such, the study sites and many of the methodologies are the same. There are four objectives of the study. They are: (1) to identify and define from existing data, the probable distribution of subtidal deposits along the Washington coast, (2) to document petroleum hydrocarbon contamination in shallow subtidal sediments in the Olympic National Park and along the Washington outer coast, (3) to characterize petroleum hydrocarbon contamination in molluscan and other species' tissues of opportunity in subtidal habitats along the Washington outer coast, and (4) to collect the initial faunal and sediment samples required for possible future analyses should oil-spill related hydrocarbons be detected from initial sediment and tissue analyses.
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA
NASA Technical Reports Server (NTRS)
Cheeseman, P. C.
1994-01-01
The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5.4, VAX/Ultrix v4.1, and MIPS/Ultrix v4, rev. 179; and on the Macintosh personal computer. The minimum Macintosh required is the IIci. This program will not run under CMU Common Lisp or VAX/VMS DEC Common Lisp. A minimum of 8Mb of RAM is required for Macintosh platforms and 16Mb for workstations. The standard distribution medium for this program is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 3.5 inch diskette in Macintosh format. An electronic copy of the documentation is included on the distribution medium. AUTOCLASS was developed between March 1988 and March 1992. It was initially released in May 1991. Sun is a trademark of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation. Macintosh is a trademark of Apple Computer, Inc. Allegro CL is a registered trademark of Franz, Inc.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem that minimizes the cost of conventional power generation is solved. Thus a reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
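A rough Monte Carlo sketch of the sampling and index calculation (assuming Python/numpy; the Weibull/Beta parameters, capacities and load model are invented, and this brute-force route is the benchmark that the paper's faster method is compared against):

    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000                                     # Monte Carlo samples (hourly snapshots)

    # Renewable resource models (illustrative parameters): Weibull wind speed, Beta solar irradiance.
    wind_speed = rng.weibull(2.0, N) * 8.0          # shape k=2, scale c=8 m/s
    irradiance = rng.beta(2.0, 2.5, N)              # normalized irradiance in [0, 1]

    def wind_power(v, rated=2.0, v_in=3.0, v_r=12.0, v_out=25.0):
        """Piecewise wind-turbine power curve in MW."""
        p = np.where((v >= v_in) & (v < v_r), rated * (v - v_in) / (v_r - v_in), 0.0)
        return np.where((v >= v_r) & (v <= v_out), rated, p)

    pv_power = 1.5 * irradiance                     # 1.5 MW PV park, linear in irradiance
    conventional = 4.0                              # MW of conventional capacity assumed always available
    load = rng.normal(6.0, 0.8, N)                  # MW load

    shortfall = np.maximum(load - (wind_power(wind_speed) + pv_power + conventional), 0.0)
    LOLP = np.mean(shortfall > 0)
    EENS = shortfall.mean() * 8760                  # MWh/year, treating each sample as one hour
    print(f"LOLP = {LOLP:.4f}, EENS = {EENS:.1f} MWh/yr")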
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2011 CFR
2011-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
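A minimal sketch of the recommended approach (assuming Python with numpy/scipy; the 2-hour sampling intervals and the lognormal "true" retention times are invented): the parametric CDF is fitted to the cumulative proportion of recovered propagules at the interval bounds by non-linear least squares.

    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(5)
    true_rt = rng.lognormal(mean=np.log(4.0), sigma=0.6, size=600)     # "true" retention times (h)

    # Discretize as in a feeding trial: count propagules recovered within each sampling interval.
    edges = np.arange(0, 25, 2.0)                                      # 2-hour sampling intervals
    counts, _ = np.histogram(true_rt, bins=edges)
    cum_prop = np.cumsum(counts) / counts.sum()                        # empirical CDF at upper interval bounds

    # Fit the lognormal CDF to the cumulative proportions by non-linear least squares.
    def lognorm_cdf(t, mu, sigma):
        return stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))

    popt, _ = optimize.curve_fit(lognorm_cdf, edges[1:], cum_prop,
                                 p0=(1.0, 0.5), bounds=([-5, 1e-3], [5, 5]))
    print("fitted mu, sigma:", np.round(popt, 3), " (true values: 1.386, 0.6)")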
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
ERIC Educational Resources Information Center
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
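A small simulation sketch of the central claim (assuming Python with numpy/scipy and a lognormal risk factor with invented parameters): when the threshold is set at the 99% quantile of a distribution fitted to a finite sample, the expected failure frequency exceeds the nominal 1%.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(11)
    mu, sigma = 0.0, 1.0              # true (but unknown) parameters of a lognormal risk factor
    target_p = 0.01                   # nominal failure probability the decision-maker aims for
    n, reps = 50, 20_000

    fail_prob = np.empty(reps)
    for k in range(reps):
        logs = np.log(rng.lognormal(mu, sigma, n))
        m, s = logs.mean(), logs.std(ddof=1)                 # parameters estimated from the finite sample
        threshold = np.exp(m + s * norm.ppf(1 - target_p))   # control set at the *fitted* 99% quantile
        fail_prob[k] = norm.sf((np.log(threshold) - mu) / sigma)   # true exceedance probability

    print(f"nominal failure probability: {target_p}")
    print(f"expected failure frequency under parameter uncertainty: {fail_prob.mean():.4f}")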
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1993-10-01
Performance of 8 three-parameter probability distributions for representing annual extreme and partial duration precipitation data at stations in the northeastern and southeastern United States is investigated. Particular attention is paid to fidelity on the right tail, through use of a bootstrap procedure simulating extrapolation on the right tail beyond the data. It is found that the beta-κ distribution best describes the extreme right tail of annual extreme series, and the beta-P distribution is best for the partial duration data. The conventionally employed two-parameter Gumbel distribution is found to substantially underestimate probabilities associated with the larger precipitation amounts for both annual extreme and partial duration data. Fitting the distributions using left-censored data did not result in improved fits to the right tail.
On the generation of climate model ensembles
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.
2014-10-01
Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.
Dispersion and Lifetime of the SO2 Cloud from the August 2008 Kasatochi Eruption
NASA Technical Reports Server (NTRS)
Krotkov, N. A.; Schoeberl, M. R.; Morris, G. A.; Carn, S.; Yang, K.
2010-01-01
Hemispherical dispersion of the SO2 cloud from the August 2008 Kasatochi eruption is analyzed using satellite data from the Ozone Monitoring Instrument (OMI) and the Goddard Trajectory Model (GTM). The operational OMI retrievals underestimate the total SO2 mass by 20-30% on 8-11 August, as compared with more accurate offline Extended Iterative Spectral Fit (EISF) retrievals, but the error decreases with time due to plume dispersion and a drop in peak SO2 column densities. The GTM runs were initialized with and compared to the operational OMI SO2 data during early plume dispersion to constrain SO2 plume heights and eruption times. The most probable SO2 heights during initial dispersion are estimated to be 10-12 km, in agreement with direct height retrievals using the EISF algorithm and IR measurements. Using these height constraints a forward GTM run was initialized on 11 August to compare with the month-long Kasatochi SO2 cloud dispersion patterns. Predicted volcanic cloud locations generally agree with OMI observations, although some discrepancies were observed. Operational OMI SO2 burdens were refined using GTM-predicted mass-weighted probability density height distributions. The total refined SO2 mass was integrated over the Northern Hemisphere to place empirical constraints on the SO2 chemical decay rate. The resulting lower limit of the Kasatochi SO2 e-folding time is approx. 8-9 days. Extrapolation of the exponential decay back in time yields an initial erupted SO2 mass of approx. 2.2 Tg on 8 August, twice as much as the measured mass on that day.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
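A sketch of the finite-statistics effect (assuming Python/numpy; E(a,b) = cos(a-b) with unbiased marginals is the standard maximally entangled two-qubit correlation, and estimating the CH singles from a single settings block is only one of several possible conventions):

    import numpy as np

    rng = np.random.default_rng(0)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4      # settings giving the Tsirelson value 2*sqrt(2)
    N = 1000                                                   # samples per setting pair

    def sample_pair(x, y, n):
        """Sample n outcome pairs (+/-1, +/-1) with E = cos(x - y) and unbiased marginals."""
        equal = rng.random(n) < (1 + np.cos(x - y)) / 2
        alice = rng.choice([-1, 1], n)
        bob = np.where(equal, alice, -alice)
        return alice, bob

    def estimates():
        E, Ppp, Pa, Pb = {}, {}, {}, {}
        for key, (x, y) in {"ab": (a, b), "ab'": (a, b2), "a'b": (a2, b), "a'b'": (a2, b2)}.items():
            A, B = sample_pair(x, y, N)
            E[key] = np.mean(A * B)
            Ppp[key] = np.mean((A == 1) & (B == 1))
            Pa[key], Pb[key] = np.mean(A == 1), np.mean(B == 1)
        S = E["ab"] + E["ab'"] + E["a'b"] - E["a'b'"]                       # CHSH, local bound 2
        CH = Ppp["ab"] + Ppp["ab'"] + Ppp["a'b"] - Ppp["a'b'"] - Pa["ab"] - Pb["ab"]   # CH, local bound 0
        return S, CH

    vals = np.array([estimates() for _ in range(2000)])
    print("CHSH: mean %.3f, std %.3f" % (vals[:, 0].mean(), vals[:, 0].std()))
    print("CH  : mean %.3f, std %.3f" % (vals[:, 1].mean(), vals[:, 1].std()))

Because the sampled marginals are only approximately 1/2, the algebraic identity CH = (S - 2)/4 no longer holds exactly, which is why the two estimated violations behave differently.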
Bayesian lead time estimation for the Johns Hopkins Lung Project data.
Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng
2013-09-01
Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefits, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates were presented of the projected lead time for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. The newly developed method of lead time estimation was applied, in which the lifetime T is treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also a random variable. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained, and then the lead time distribution was projected using the JHLP data. The analysis of the JHLP data shows that, for a male heavy smoker with initial screening ages at 50, 60, and 70, the probability of no-early-detection with semiannual screens will be 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no-early-detection increases monotonically when the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when the initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. It is hoped that these results will help to improve current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
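For the capital-dependent games, the distribution can also be obtained by propagating the probability vector round by round, as in the sketch below (assuming Python and the standard Parrondo parameters p = 1/2 - eps, p1 = 1/10 - eps, p2 = 3/4 - eps; this direct recursion is an alternative to, not a reproduction of, the Fourier-transform calculation):

    import numpy as np

    eps = 0.005
    pA = 0.5 - eps                          # game A win probability
    pB0, pB = 0.1 - eps, 0.75 - eps         # game B: capital divisible by 3 / otherwise

    def step(dist, game):
        """One round of play: dist maps capital -> probability; returns the updated distribution."""
        new = {}
        for c, p in dist.items():
            if game == 'A':
                pw = pA
            else:
                pw = pB0 if c % 3 == 0 else pB
            new[c + 1] = new.get(c + 1, 0.0) + p * pw
            new[c - 1] = new.get(c - 1, 0.0) + p * (1.0 - pw)
        return new

    def play(sequence, rounds):
        dist = {0: 1.0}                      # start with zero capital
        for t in range(rounds):
            dist = step(dist, sequence[t % len(sequence)])
        return dist

    for seq in ('A', 'B', 'AABB'):           # the alternation 'AABB' is the classic winning combination
        d = play(seq, 100)
        mean = sum(c * p for c, p in d.items())
        print(f"sequence {seq:>4s}: mean capital after 100 rounds = {mean:+.3f}")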
Optimizing the clinical utility of PCA3 to diagnose prostate cancer in initial prostate biopsy.
Rubio-Briones, Jose; Borque, Angel; Esteban, Luis M; Casanova, Juan; Fernandez-Serra, Antonio; Rubio, Luis; Casanova-Salas, Irene; Sanz, Gerardo; Domínguez-Escrig, Jose; Collado, Argimiro; Gómez-Ferrer, Alvaro; Iborra, Inmaculada; Ramírez-Backhaus, Miguel; Martínez, Francisco; Calatrava, Ana; Lopez-Guerrero, Jose A
2015-09-11
PCA3 has been included in a nomogram outperforming previous clinical models for the prediction of any prostate cancer (PCa) and high grade PCa (HGPCa) at the initial prostate biopsy (IBx). Our objective is to validate such an IBx-specific PCA3-based nomogram. We also aim to optimize the use of this nomogram in clinical practice through the definition of risk groups. Independent external validation. Clinical and biopsy data from a contemporary cohort of 401 men with the same inclusion criteria as those used to build the reference nomogram in IBx. The predictive value of the nomogram was assessed by means of calibration curves and discrimination ability through the area under the curve (AUC). Clinical utility of the nomogram was analyzed by choosing threshold points that minimize the overlap between the probability density functions (PDFs) in the PCa versus no PCa and HGPCa versus no HGPCa groups, and net benefit was assessed by decision curves. We detected 28% of PCa and 11% of HGPCa in IBx, in contrast to 46 and 20% in the reference series. Because of this, there is an overestimation of the nomogram probabilities shown in the calibration curve for PCa. The AUC values are 0.736 for PCa (95% CI 0.68-0.79) and 0.786 for HGPCa (95% CI 0.71-0.87), showing adequate discrimination ability. The PDFs show differences in the distributions of nomogram probabilities between PCa and non-PCa patient groups. Minimization of the overlap between these curves confirms that the threshold probability of harboring PCa >30% proposed by Hansen is useful to indicate an IBx, but a cut-off >40% could be better in series of opportunistic screening like ours. Similar results appear in the HGPCa analysis. The decision curve also shows a net benefit of 6.31% for the threshold probability of 40%. PCA3 is a useful tool to select patients for IBx. Patients with a calculated probability of having PCa over 40% should be counseled to undergo an IBx if opportunistic screening is required.
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings when initializing a target: (a) they cannot effectively suppress the disturbances caused by clutter; (b) they may exhibit a high false alarm probability and a low detection probability for a track; and (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, which include the true track originated from the target, in order to increase the detection probability of a track; and, in order to decrease the false alarm probability, track pruning and track merging based on an evaluation mechanism are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method. Moreover, our method is fully automatic and does not require any manual input for initializing or tuning parameters. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
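The TSEPM pipeline itself (splitting, evaluating, pruning, merging) is not reproduced here, but the final step mentioned above, estimating the initial state of a confirmed track by least squares, can be sketched in a few lines. The example below fits a constant-velocity model to noisy 1-D position measurements; the times, noise level and motion model are illustrative assumptions.

```python
import numpy as np

def initial_state_lsq(times, positions):
    """Constant-velocity least-squares fit: returns (x0, v) minimising
    sum_k (x0 + v * t_k - z_k)^2 for position measurements z_k at times t_k."""
    H = np.column_stack([np.ones_like(times), times])    # measurement matrix
    state, *_ = np.linalg.lstsq(H, positions, rcond=None)
    return state                                          # [x0, v]

# Toy 1-D example: noisy positions of a target moving at 4 m/s from x0 = 100 m
rng = np.random.default_rng(0)
t = np.array([0.0, 1.0, 2.0, 3.0])
z = 100.0 + 4.0 * t + rng.normal(0.0, 5.0, size=t.size)   # large measurement errors
print(initial_state_lsq(t, z))
```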
A comparative analysis of hazard models for predicting debris flows in Madison County, VA
Morrissey, Meghan M.; Wieczorek, Gerald F.; Morgan, Benjamin A.
2001-01-01
During the rainstorm of June 27, 1995, roughly 330-750 mm of rain fell within a sixteen-hour period, initiating floods and over 600 debris flows in a small area (130 km²) of Madison County, Virginia. Field studies showed that the majority (70%) of these debris flows initiated with a thickness of 0.5 to 3.0 m in colluvium on slopes from 17° to 41° (Wieczorek et al., 2000). This paper evaluated and compared the approaches of SINMAP, LISA, and Iverson's (2000) transient response model for slope stability analysis by applying each model to the landslide data from Madison County. Of these three stability models, only Iverson's transient response model evaluated stability conditions as a function of time and depth. Iverson's model would be the preferred method of the three to evaluate landslide hazards on a regional scale in areas prone to rain-induced landslides, as it considers both the transient and spatial response of pore pressure in its calculation of slope stability. The stability calculation used in SINMAP and LISA is similar and utilizes probability distribution functions for certain parameters. Unlike SINMAP, which only considers soil cohesion, internal friction angle and rainfall-rate distributions, LISA allows the use of distributed data for all parameters, so it is the preferred model of the two to evaluate slope stability. Results from all three models suggested similar soil and hydrologic properties for triggering the landslides that occurred during the 1995 storm in Madison County, Virginia. The colluvium probably had cohesion of less than 2 kPa. The root-soil system is above the failure plane and consequently root strength and tree surcharge had negligible effect on slope stability. The result that the final location of the water table was near the ground surface is supported by the water budget analysis of the rainstorm conducted by Smith et al. (1996).
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both the quantum theory and the computational complexity theory points of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
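Since the output probabilities for thermal input states are proportional to matrix permanents, a small worked example may help. The sketch below computes a permanent exactly with Ryser's formula (exponential time, so only for small matrices) and applies it to a positive-semidefinite matrix built as B Bᴴ; this is purely illustrative and says nothing about the approximation-complexity results of the paper.

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2)."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

# Thermal-state output probabilities are proportional to permanents of
# positive-semidefinite Hermitian matrices; a small PSD example by construction:
B = np.array([[1.0, 0.2], [0.3, 1.0]])
M = B @ B.conj().T
print(permanent(M), M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0])   # 2x2 check: ad + bc
```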
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
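A minimal sketch of this kind of fit in PyMC3 is shown below, under strong simplifying assumptions: the proliferation terms and the Gauss-Legendre integration over α are omitted, a single α with a lognormal prior is used, the clonogen number is fixed, and the dose levels and outcome counts are hypothetical. It is meant only to show the Slice-sampler workflow, not the authors' model.

```python
import numpy as np
import pymc3 as pm

eqd2 = np.array([45., 48., 50., 55., 60.])     # EQD2 per dose group (Gy), illustrative
n_pat = np.array([80, 90, 100, 110, 120])      # patients per group, illustrative
n_ctrl = np.array([19, 54, 78, 105, 119])      # locally controlled, illustrative

N0 = 1.0e7                                     # fixed clonogen number (an assumption)
with pm.Model() as tcp_model:
    alpha = pm.Lognormal('alpha', mu=np.log(0.3), sigma=0.5)           # Gy^-1
    tcp_raw = pm.math.exp(-N0 * pm.math.exp(-alpha * eqd2))            # Poisson TCP
    tcp = pm.Deterministic('tcp', pm.math.clip(tcp_raw, 1e-12, 1 - 1e-12))
    pm.Binomial('obs', n=n_pat, p=tcp, observed=n_ctrl)
    trace = pm.sample(2000, tune=1000, step=pm.Slice(), cores=1)

print(pm.summary(trace, var_names=['alpha']))
```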
Shah, R; Worner, S P; Chapman, R B
2012-10-01
Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
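For the random-dispersion case, the sample size needed to detect at least one resistant individual follows from the binomial detection calculation sketched below; it roughly reproduces the reported figure of 300 at a 1% resistance frequency, while the larger figures quoted for patchy distributions and for the 10-20% cases come from the authors' simulations and are not captured by this simple formula.

```python
import math

def sample_size_detect(freq, prob_detect=0.95):
    """Smallest n with P(at least one resistant individual in the sample) >= prob_detect,
    assuming random dispersion (simple random sampling, binomial model)."""
    return math.ceil(math.log(1.0 - prob_detect) / math.log(1.0 - freq))

for f in (0.01, 0.10, 0.20):
    print(f"resistance frequency {f:.0%}: n = {sample_size_detect(f)}")
```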
Distribution of shortest path lengths in a class of node duplication network models
NASA Astrophysics Data System (ADS)
Steinbock, Chanania; Biham, Ofer; Katzav, Eytan
2017-09-01
We present analytical results for the distribution of shortest path lengths (DSPL) in a network growth model which evolves by node duplication (ND). The model captures essential properties of the structure and growth dynamics of social networks, acquaintance networks, and scientific citation networks, where duplication mechanisms play a major role. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network, forming a link to the mother node, and with probability p to each one of its neighbors. The degree distribution of the resulting network turns out to follow a power-law distribution, thus the ND network is a scale-free network. To calculate the DSPL we derive a master equation for the time evolution of the probability P_t(L = ℓ), ℓ = 1, 2, ..., where L is the distance between a pair of nodes and t is the time. Finding an exact analytical solution of the master equation, we obtain a closed form expression for P_t(L = ℓ). The mean distance ⟨L⟩_t and the diameter Δ_t are found to scale like ln t, namely, the ND network is a small-world network. The variance of the DSPL is also found to scale like ln t. Interestingly, the mean distance and the diameter exhibit properties of a small-world network, rather than the ultrasmall-world network behavior observed in other scale-free networks, in which ⟨L⟩_t ~ ln ln t.
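The growth rule is simple enough to simulate directly, and comparing an empirical DSPL with the analytical result is a natural check. The sketch below grows an ND network with networkx (the seed network, node count and p are arbitrary choices) and tallies the empirical P(L = ℓ) over all connected pairs.

```python
import random
from collections import Counter
import networkx as nx

def nd_network(n_nodes, p, seed=1):
    """Node-duplication growth: each new (daughter) node links to a random mother
    node and, independently with probability p, to each of the mother's neighbours."""
    rng = random.Random(seed)
    G = nx.complete_graph(3)                      # small seed network
    for new in range(3, n_nodes):
        mother = rng.randrange(new)               # pick an existing node uniformly
        neighbours = list(G.neighbors(mother))
        G.add_edge(new, mother)
        for v in neighbours:
            if rng.random() < p:
                G.add_edge(new, v)
    return G

G = nd_network(2000, p=0.3)
dspl = Counter(l for _, lengths in nx.all_pairs_shortest_path_length(G)
                 for l in lengths.values() if l > 0)
total = sum(dspl.values())
print({l: round(c / total, 4) for l, c in sorted(dspl.items())})   # empirical P(L = l)
```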
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Measurements of gas hydrate formation probability distributions on a quasi-free water droplet
NASA Astrophysics Data System (ADS)
Maeda, Nobuo
2014-06-01
A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. It would ideally be desirable to be able to measure gas hydrate formation probability distributions of a single water droplet or mist that is freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of a water droplet with the solid walls. Here we report the development of a second generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet which sits on a perfluorocarbon oil in a container that is coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.
Fragment size distribution in viscous bag breakup of a drop
NASA Astrophysics Data System (ADS)
Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.
2015-11-01
In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of Weber number, We, and Ohnesorge number, Oh, on the disintegration process. The regime of breakup considered is observed for 12 ≤ We ≤ 16 and Oh ≤ 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh we observe a large number of small-diameter drops and a small number of large-diameter drops. On the other hand, with increasing Oh for a fixed We the opposite is seen.
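The difference between the number-weighted and volume-weighted views can be reproduced with a few lines of post-processing: weighting each fragment by d³ before normalising is what lets a modest number of large rim fragments produce a second peak. The sketch below uses synthetic lognormal "bag" and "rim" populations purely for illustration, not the phase Doppler data.

```python
import numpy as np

def number_and_volume_pdfs(diameters, bins=30):
    """Number-weighted and volume-weighted fragment size histograms.
    Volume weighting uses d**3, so a few large rim fragments that are barely
    visible in the number PDF can appear as a second peak in the volume PDF."""
    d = np.asarray(diameters)
    edges = np.linspace(d.min(), d.max(), bins + 1)
    n_pdf, _ = np.histogram(d, bins=edges, density=True)
    v_pdf, _ = np.histogram(d, bins=edges, weights=d**3, density=True)
    return edges, n_pdf, v_pdf

# Illustrative sample: many small "bag" fragments plus fewer, larger "rim" fragments
rng = np.random.default_rng(2)
diam = np.concatenate([rng.lognormal(np.log(50e-6), 0.4, 5000),    # bag fragments (m)
                       rng.lognormal(np.log(300e-6), 0.3, 500)])   # rim fragments (m)
edges, n_pdf, v_pdf = number_and_volume_pdfs(diam)
print("number-PDF peak bin:", edges[np.argmax(n_pdf)],
      "volume-PDF peak bin:", edges[np.argmax(v_pdf)])
```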
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks accruing to the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After using a Monte Carlo technique to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, factors affecting those consequences are identified in conjunction with their probability distributions. The functional relationship among all the factors is then established, and probability distributions for all factor effects are combined by means of a Monte Carlo technique.
Score distributions of gapped multiple sequence alignments down to the low-probability tail
NASA Astrophysics Data System (ADS)
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
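As a concrete instance of the zero-inflated binomial mixture described above, the sketch below writes down the integrated likelihood when the detection probability p is mixed over a Beta(a, b) distribution (one of several possible mixing classes) and fits it to simulated data by maximum likelihood; the simulation settings are arbitrary and the code is not the author's implementation.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def log_choose(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def neg_log_lik(params, y, J):
    """Zero-inflated binomial mixture with Beta(a, b) heterogeneity in detection p.
    y[i] = detections at site i over J visits; psi = occupancy probability."""
    psi = 1.0 / (1.0 + np.exp(-params[0]))
    a, b = np.exp(params[1]), np.exp(params[2])
    # marginal P(y | occupied) = C(J, y) * B(a + y, b + J - y) / B(a, b)  (beta-binomial)
    log_bb = log_choose(J, y) + betaln(a + y, b + J - y) - betaln(a, b)
    lik = psi * np.exp(log_bb) + (1.0 - psi) * (y == 0)
    return -np.sum(np.log(lik))

rng = np.random.default_rng(3)
J, n_sites, psi_true = 5, 200, 0.6
occupied = rng.random(n_sites) < psi_true
p_site = rng.beta(2.0, 3.0, n_sites)                       # heterogeneous detection
y = np.where(occupied, rng.binomial(J, p_site), 0)
fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], args=(y, J))
print("estimated psi:", 1.0 / (1.0 + np.exp(-fit.x[0])))
```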
Landslide Probability Assessment by the Derived Distributions Technique
NASA Astrophysics Data System (ADS)
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides detonated by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model is used along with the soil characteristic curve (suction vs. moisture) and the Mohr-Coulomb failure criterion in order to calculate the FOS of the slope. Data from two slopes located in steep tropical regions of the cities of Medellín (Colombia) and Rio de Janeiro (Brazil) were used to verify the model's performance. The results indicated significant differences between the obtained FOS values and the behavior observed in the field. The model shows relatively high values of FOS that do not reflect the instability of the analyzed slopes. For the two cases studied, the application of a simpler reliability concept (such as the Probability of Failure, PR, or the Reliability Index, β) instead of the FOS could lead to more realistic results.
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
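The basic probability-of-success calculation, before any subgroup selection enters, can be sketched as an average of the phase III power over a normal distribution centred on the phase II estimate. The effect sizes and standard errors below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np
from scipy.stats import norm

def probability_of_success(theta_hat2, se2, se3, alpha=0.025, n_draws=100_000, seed=4):
    """Average power for phase III: weight the phase III power curve by a normal
    'design prior' centred on the phase II estimate theta_hat2 with s.e. se2."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(theta_hat2, se2, n_draws)          # uncertainty about the true effect
    power = norm.cdf(theta / se3 - norm.ppf(1 - alpha))   # one-sided test in phase III
    return power.mean()

print(probability_of_success(theta_hat2=0.3, se2=0.15, se3=0.10))
```

If theta_hat2 is taken from the most extreme phase II subgroup, it is biased upward and so is the resulting estimate, which is the selection effect the abstract investigates.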
General formulation of long-range degree correlations in complex networks
NASA Astrophysics Data System (ADS)
Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke
2018-06-01
We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
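Empirically, the joint and conditional distributions described above can be tabulated directly from a finite network. The sketch below does this with networkx, using an Erdős-Rényi graph as a crude stand-in for the random-network baseline; the graph size and parameters are arbitrary.

```python
from collections import Counter
import networkx as nx

def joint_degree_distance(G):
    """Empirical joint distribution P(k, k', l) of the degrees of two nodes and the
    shortest path length l between them, over all connected ordered pairs."""
    deg = dict(G.degree())
    counts = Counter()
    for u, lengths in nx.all_pairs_shortest_path_length(G):
        for v, l in lengths.items():
            if l > 0:
                counts[(deg[u], deg[v], l)] += 1
    total = sum(counts.values())
    return {key: c / total for key, c in counts.items()}

G = nx.erdos_renyi_graph(500, 0.02, seed=5)     # random baseline for comparison
P = joint_degree_distance(G)
# e.g. the conditional distribution P(k' | k = 10, l = 2), one of the four conditionals
k, l = 10, 2
sub = {kp: p for (ki, kp, li), p in P.items() if ki == k and li == l}
norm_c = sum(sub.values())
print({kp: round(p / norm_c, 3) for kp, p in sorted(sub.items())} if norm_c else "no such pairs")
```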
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs both little underutilised and overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated by a well defined duration distribution of the lists. To propose a method of predicting the probabilities of underrun and overrun of lists of cases using Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by Type IV Pearson distribution and t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. Values are mean (SEM). The proposed Type IV Pearson distribution is more accurate than t-distribution to estimate the probabilities of underrun and overrun of lists of cases. However, as not all the individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
Causality, apparent ``superluminality,'' and reshaping in barrier penetration
NASA Astrophysics Data System (ADS)
Sokolovski, D.
2010-04-01
We consider tunneling of a nonrelativistic particle across a potential barrier. It is shown that the barrier acts as an effective beam splitter which builds up the transmitted pulse from copies of the initial envelope shifted in coordinate space backward relative to free propagation. Although causality is explicitly obeyed along each pathway, in special cases reshaping can result in an overall reduction of the initial envelope, accompanied by an arbitrary coordinate shift. In the case of a high barrier the delay amplitude distribution (DAD) mimics a Dirac δ function, the transmission amplitude is superoscillatory for finite momenta, and tunneling leads to an accurate advancement of the (reduced) initial envelope by the barrier width. In the case of a wide barrier, the initial envelope is accurately translated into the complex coordinate plane. The complex shift, given by the first moment of the DAD, accounts for both the displacement of the maximum of the transmitted probability density and the increase in its velocity. It is argued that analyzing apparent "superluminality" in terms of spatial displacements helps avoid the contradictions associated with time parameters such as the phase time.
Scale-invariant properties of public-debt growth
NASA Astrophysics Data System (ADS)
Petersen, A. M.; Podobnik, B.; Horvatic, D.; Stanley, H. E.
2010-05-01
Public debt is one of the important economic variables that quantitatively describes a nation's economy. Because bankruptcy is a risk faced even by institutions as large as governments (e.g., Iceland), national debt should be strictly controlled with respect to national wealth. Also, the problem of eliminating extreme poverty in the world is closely connected to the study of extremely poor debtor nations. We analyze the time evolution of national public debt and find "convergence": initially less-indebted countries increase their debt more quickly than initially more-indebted countries. We also analyze the public debt-to-GDP ratio R, a proxy for default risk, and approximate the probability density function P(R) with a Gamma distribution, which can be used to establish thresholds for sustainable debt. We also observe "convergence" in R: countries with initially small R increase their R more quickly than countries with initially large R. The scaling relationships for debt and R have practical applications, e.g. the Maastricht Treaty requires members of the European Monetary Union to maintain R < 0.6.
Can we expect to predict climate if we cannot shadow weather?
NASA Astrophysics Data System (ADS)
Smith, Leonard
2010-05-01
What limits our ability to predict (or project) useful statistics of future climate? And how might we quantify those limits? In the early 1960s, Ed Lorenz illustrated one constraint on point forecasts of the weather (chaos) while noting another (model imperfections). In the mid-sixties he went on to discuss climate prediction, noting that chaos, per se, need not limit accurate forecasts of averages and the distributions that define climate. In short, chaos might place draconian limits on what we can say about a particular summer day in 2010 (or 2040), but it need not limit our ability to make accurate and informative statements about the weather over this summer as a whole, or climate distributions of the 2040's. If not chaos, what limits our ability to produce decision relevant probability distribution functions (PDFs)? Is this just a question of technology (raw computer power) and uncertain boundary conditions (emission scenarios)? Arguably, current model simulations of the Earth's climate are limited by model inadequacy: not that the initial or boundary conditions are unknown but that state-of-the-art models would not yield decision-relevant probability distributions even if they were known. Or to place this statement in an empirically falsifiable format: that in 2100 when the boundary conditions are known and computer power is (hopefully) sufficient to allow exhaustive exploration of today's state-of-the-art models: we will find today's models do not admit a trajectory consistent with our knowledge of the state of the earth in 2009 which would prove of decision support relevance for, say, 25 km, hourly resolution. In short: today's models cannot shadow the weather of this century even after the fact. Restating this conjecture in a more positive frame: a 2100 historian of science will be able to determine the highest space and time scales on which 2009 models could have (i) produced trajectories plausibly consistent with the (by then) observed twenty-first century and (ii) produced probability distributions useful as such for decision support. As it will be some time until such conjectures can be refuted, how might we best advise decision makers of the detail (specifically, space and time resolution of a quantity of interest as a function of lead-time) that it is rational to interpret model-based PDFs as decision-relevant probability distributions? Given the nonlinearities already incorporated in our models, how far into the future can one expect a simulation to get the temperature "right" given the simulation has precipitation badly "wrong"? When can biases in local temperature which melt model-ice no longer be dismissed, and neglected by presenting model-anomalies? At what lead times will feedbacks due to model inadequacies cause the 2007 model simulations to drift away from what today's basic science (and 2100 computer power) would suggest? How might one justify quantitative claims regarding "extreme events" (or NUMB weather)? Models are unlikely to forecast things they cannot shadow, or at least track. There is no constraint on rational scientists to take model distributions as their subjective probabilities, unless they believe the model is empirically adequate. How then are we to use today's simulations to inform today's decisions? Two approaches are considered. The first augments the model-based PDF with an explicit subjective-probability of a "Big Surprise". 
The second is to look not for a PDF but, following Solvency II, consider the risk from any event that cannot be ruled out at, say, the one in 200 level. The fact that neither approach provides the simplicity and apparent confidence of interpreting model-based PDFs as if they were objective probabilities does not contradict the claim that either might lead to better decision-making.
Strength and stability of microbial plugs in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, A.K.; Sharma, M.M.; Georgiou, G.
1995-12-31
Mobility reduction induced by the growth and metabolism of bacteria in high-permeability layers of heterogeneous reservoirs is an economically attractive technique to improve sweep efficiency. This paper describes an experimental study conducted in sandpacks using an injected bacterium to investigate the strength and stability of microbial plugs in porous media. Successful convective transport of bacteria is important for achieving sufficient initial bacteria distribution. The chemotactic and diffusive fluxes are probably not significant even under static conditions. Mobility reduction depends upon the initial cell concentrations and increase in cell mass. For single or multiple static or dynamic growth techniques, permeability reduction was approximately 70% of the original permeability. The stability of these microbial plugs to increases in pressure gradient and changes in cell physiology in a nutrient-depleted environment needs to be improved.
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
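A toy version of the two sampling-based comparators is easy to set up: propagate lognormal basic-event uncertainties through a small fault tree by Monte Carlo, and compare a percentile of the resulting top-event distribution with the Wilks order-statistic bound (with 59 samples the sample maximum is a 95%/95% upper bound on the 95th percentile, since 0.95^59 < 0.05). The tree structure and lognormal parameters below are invented for illustration, and the article's closed-form lognormal approximation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def top_event_prob(basic_p):
    """Toy fault tree: TOP = (B1 AND B2) OR B3, assuming independent basic events."""
    b1, b2, b3 = basic_p
    return b1 * b2 + b3 - b1 * b2 * b3

def sample_top(n):
    """Propagate lognormal uncertainty on the basic-event probabilities."""
    medians, gsd = np.array([1e-2, 5e-3, 1e-4]), 3.0       # illustrative basic events
    draws = rng.lognormal(np.log(medians), np.log(gsd), size=(n, 3))
    return np.array([top_event_prob(p) for p in draws])

full = sample_top(100_000)
print("Monte Carlo 95th percentile:", np.percentile(full, 95))

# Wilks: with n = 59 samples, the sample maximum is a 95%/95% upper bound
# on the 95th percentile of the top-event distribution.
wilks = sample_top(59).max()
print("Wilks 95/95 upper bound   :", wilks)
```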
Quantum key distribution without the wavefunction
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution allows a much more general and abstract access than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.
2002-01-01
The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments, volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff. These probability distributions can be used for such studies involving the hydrogeologic units in the region, as well as for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appears to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
Symplectic evolution of Wigner functions in Markovian open systems.
Brodier, O; Almeida, A M Ozorio de
2004-01-01
The Wigner function is known to evolve classically under the exclusive action of a quadratic Hamiltonian. If the system also interacts with the environment through Lindblad operators that are complex linear functions of position and momentum, then the general evolution is the convolution of a non-Hamiltonian classical propagation of the Wigner function with a phase space Gaussian that broadens in time. We analyze the consequences of this in the three generic cases of elliptic, hyperbolic, and parabolic Hamiltonians. The Wigner function always becomes positive in a definite time, which does not depend on the initial pure state. We observe the influence of classical dynamics and dissipation upon this threshold. We also derive an exact formula for the evolving linear entropy as the average of a narrowing Gaussian taken over a probability distribution that depends only on the initial state. This leads to a long time asymptotic formula for the growth of linear entropy. We finally discuss the possibility of recovering the initial state.
Monte Carlo calculations of diatomic molecule gas flows including rotational mode excitation
NASA Technical Reports Server (NTRS)
Yoshikawa, K. K.; Itikawa, Y.
1976-01-01
The direct simulation Monte Carlo method was used to solve the Boltzmann equation for flows of an internally excited nonequilibrium gas, namely, of rotationally excited homonuclear diatomic nitrogen. The semi-classical transition probability model of Itikawa was investigated for its ability to simulate flow fields far from equilibrium. The behavior of diatomic nitrogen was examined for several different nonequilibrium initial states that are subjected to uniform mean flow without boundary interactions. A sample of 1000 model molecules was observed as the gas relaxed to a steady state starting from three specified initial states. The initial states considered are: (1) complete equilibrium, (2) nonequilibrium, equipartition (all rotational energy states are assigned the mean energy level obtained at equilibrium with a Boltzmann distribution at the translational temperature), and (3) nonequipartition (the mean rotational energy is different from the equilibrium mean value with respect to the translational energy states). In all cases investigated the present model satisfactorily simulated the principal features of the relaxation effects in nonequilibrium flow of diatomic molecules.
Statistical representation of a spray as a point process
NASA Astrophysics Data System (ADS)
Subramaniam, S.
2000-10-01
The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf), relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed.
Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
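The ISO-GUM construction that the Bayesian alternative is meant to replace can be sketched in a few lines: combine the component uncertainties, compute the Welch-Satterthwaite effective degrees of freedom, and take the coverage factor from a t-distribution. The numbers below are illustrative, and the Bayesian normal approximation itself is not reproduced here.

```python
import numpy as np
from scipy import stats

def ws_effective_dof(u, dof):
    """Welch-Satterthwaite effective degrees of freedom for a combined standard
    uncertainty u_c = sqrt(sum u_i^2) built from components u_i with dof nu_i."""
    u, dof = np.asarray(u, float), np.asarray(dof, float)
    uc2 = np.sum(u**2)
    return uc2**2 / np.sum(u**4 / dof)

# Difference of two means: u_i = s_i / sqrt(n_i), nu_i = n_i - 1 (illustrative values)
u = [0.8 / np.sqrt(5), 1.5 / np.sqrt(7)]
nu = [4, 6]
uc = np.sqrt(sum(ui**2 for ui in u))
nu_eff = ws_effective_dof(u, nu)
k_t = stats.t.ppf(0.975, nu_eff)            # ISO-GUM style coverage factor
print(f"u_c = {uc:.3f}, nu_eff = {nu_eff:.1f}, 95% interval half-width: "
      f"t-based {k_t * uc:.3f} vs normal-based {1.96 * uc:.3f}")
```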
Does litter size variation affect models of terrestrial carnivore extinction risk and management?
Devenish-Nelson, Eleanor S; Stephens, Philip A; Harris, Stephen; Soulsbury, Carl; Richards, Shane A
2013-01-01
Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species - the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes.
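To illustrate the kind of comparison described above, the sketch below scores two candidate distributions (a Poisson and a discretised normal, fitted here by simple moment matching rather than the authors' procedure) on hypothetical litter-size counts and compares them by AIC; for under-dispersed litter sizes the discretised normal typically wins, consistent with the abstract's finding.

```python
import numpy as np
from scipy.stats import norm, poisson

def discretised_normal_logpmf(k, mu, sigma, kmax=30):
    """Normal probability mass assigned to integer litter sizes 1..kmax and renormalised."""
    ks = np.arange(1, kmax + 1)
    cell = norm.cdf(ks + 0.5, mu, sigma) - norm.cdf(ks - 0.5, mu, sigma)
    cell /= cell.sum()
    return np.log(cell[np.asarray(k) - 1])

def aic(loglik, n_par):
    return 2 * n_par - 2 * loglik

# Hypothetical litter-size counts for a multiparous carnivore
rng = np.random.default_rng(7)
litters = np.clip(np.round(rng.normal(4.3, 1.1, 300)).astype(int), 1, None)

mu, sigma = litters.mean(), litters.std(ddof=1)
ll_dnorm = discretised_normal_logpmf(litters, mu, sigma).sum()
lam = litters.mean()
ll_pois = poisson.logpmf(litters, lam).sum()
print("AIC discretised normal:", aic(ll_dnorm, 2), " AIC Poisson:", aic(ll_pois, 1))
```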
Stars associated with extrasolar planets vs. β Pictoris-type stars
NASA Astrophysics Data System (ADS)
Chavero, C.; Gómez, M.
In this contribution we compare the physical properties of two groups of stars: the Planet Host Stars and the Vega-like objects. Each Planet Host Star has one or more planet-mass objects associated with it, whereas the Vega-like stars have circumstellar disks. We have compiled magnitudes, colors, parallaxes, spectral types, etc. for these objects from the literature and analyzed the distributions of these properties for both groups. We find that the samples are very similar in metallicities, ages, and spatial distributions. Our analysis suggests that the circumstellar environments are probably different while the central objects have similar physical properties. This difference may explain, at least in part, why the Planet Host Stars form extrasolar planetary objects such as those detected by the Doppler effect while the Vega-like objects are not commonly associated with such planet-mass bodies.
Study of nonequilibrium work distributions from a fluctuating lattice Boltzmann model.
Nasarayya Chari, S Siva; Murthy, K P N; Inguva, Ramarao
2012-04-01
A system of ideal gas is switched from an initial equilibrium state to a final state, not necessarily in equilibrium, by varying a macroscopic control variable according to a well-defined protocol. The distribution of work performed during the switching process is obtained. The equilibrium free energy difference, ΔF, is determined from the work fluctuation relation. Some of the work values in the ensemble will be less than ΔF; we term these values ones that "violate" the second law of thermodynamics. A fluctuating lattice Boltzmann model has been employed to carry out the simulation of the switching experiment. Our results show that the probability of violation of the second law increases with increasing switching time (τ) and tends to one-half in the reversible limit of τ→∞.
Environmental Risk Assessment Strategy for Nanomaterials.
Scott-Fordsmand, Janeck J; Peijnenburg, Willie J G M; Semenzin, Elena; Nowack, Bernd; Hunt, Neil; Hristozov, Danail; Marcomini, Antonio; Irfan, Muhammad-Adeel; Jiménez, Araceli Sánchez; Landsiedel, Robert; Tran, Lang; Oomen, Agnes G; Bos, Peter M J; Hund-Rinke, Kerstin
2017-10-19
An Environmental Risk Assessment (ERA) for nanomaterials (NMs) is outlined in this paper. In contrast to other recent papers on the subject, the main data requirements, models and advancement within each of the four risk assessment domains are described, i.e., in the: (i) materials, (ii) release, fate and exposure, (iii) hazard and (iv) risk characterisation domains. The material, which is obviously the foundation for any risk assessment, should be described according to the legislatively required characterisation data. Characterisation data will also be used at various levels within the ERA, e.g., exposure modelling. The release, fate and exposure data and models cover the input for environmental distribution models in order to identify the potential exposure scenarios (PES) and relevant exposure scenarios (RES) and, subsequently, the possible release routes, both with regard to which compartment(s) the NMs are distributed to and with regard to the factors determining their fate within each environmental compartment. The initial outcome in the risk characterisation will be a generic Predicted Environmental Concentration (PEC), but a refined PEC can be obtained by applying specific exposure models for relevant media. The hazard information covers a variety of representative, relevant and reliable organisms and/or functions, relevant for the RES and enabling a hazard characterisation. The initial outcome will be hazard characterisation in test systems allowing estimation of a Predicted No-Effect Concentration (PNEC), either based on uncertainty factors or on an NM-adapted version of the Species Sensitivity Distributions approach. The risk characterisation will either be based on a deterministic risk ratio approach (i.e., PEC/PNEC) or an overlay of probability distributions, i.e., exposure and hazard distributions, using the nano-relevant models.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
Comparative analysis through probability distributions of a data set
NASA Astrophysics Data System (ADS)
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best-fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to rank the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
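As an illustration of this workflow, the sketch below (an assumption-laden example using SciPy and synthetic data rather than the data set analysed above) fits several candidate distributions by maximum likelihood and ranks them by the Kolmogorov-Smirnov distance; Anderson-Darling and Chi-Squared statistics can be added in the same loop where the library supports them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)   # synthetic stand-in data set

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                          # maximum-likelihood fit
    ks = stats.kstest(data, dist.cdf, args=params)   # Kolmogorov-Smirnov distance
    results.append((ks.statistic, name))

# Rank the candidates: a smaller KS distance indicates a better fit.
for ks_stat, name in sorted(results):
    print(f"{name:10s}  KS statistic = {ks_stat:.4f}")
```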
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation matrix included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can approximate four-dimensional dose computed using the patient-specific respiratory trace.
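A minimal sketch of the weighting step described above, assuming the deformed phase doses are already available as arrays; the array shapes and the arcsine-style "sinusoidal" weighting are illustrative assumptions, not the study's actual data or binning.

```python
import numpy as np

# Hypothetical deformed phase doses on the breath-hold CT grid: (n_phases, nz, ny, nx).
rng = np.random.default_rng(1)
phase_doses = rng.uniform(1.8, 2.2, size=(10, 4, 4, 4))

def four_d_dose(phase_doses, temporal_probability):
    """Probability-weighted sum of the deformed phase doses."""
    p = np.asarray(temporal_probability, dtype=float)
    p = p / p.sum()                                  # temporal probabilities must sum to 1
    return np.tensordot(p, phase_doses, axes=1)

n_phases = phase_doses.shape[0]
uniform = np.full(n_phases, 1.0 / n_phases)

# One way to mimic a "sinusoidal" breathing pattern: bin the cycle by amplitude, where the
# fraction of time per bin follows an arcsine density (more time near the extremes).
edges = np.linspace(0.0, 1.0, n_phases + 1)
sinusoidal = np.diff(2.0 / np.pi * np.arcsin(np.sqrt(edges)))

print("mean dose, uniform weighting:   ", four_d_dose(phase_doses, uniform).mean())
print("mean dose, sinusoidal weighting:", four_d_dose(phase_doses, sinusoidal).mean())
```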
Heat conduction in periodic laminates with probabilistic distribution of material properties
NASA Astrophysics Data System (ADS)
Ostrowski, Piotr; Jędrysiak, Jarosław
2017-04-01
This contribution deals with a problem of heat conduction in a two-phase laminate made of periodically distributed micro-laminas along one direction. In general, Fourier's law describing the heat conduction in the considered composite has highly oscillating and discontinuous coefficients. Therefore, the tolerance averaging technique (cf. Woźniak et al. in Thermomechanics of microheterogeneous solids and structures. Monografie - Politechnika Łódzka, Wydawnictwo Politechniki Łódzkiej, Łódź, 2008) is applied. Based on this technique, the averaged differential equations for a tolerance-asymptotic model are derived and solved analytically for given initial-boundary conditions. The second part of this contribution is an investigation of the effect of the material property ratio ω of the two components on the total temperature field θ, under the assumption that the conductivities of the micro-laminas are not necessarily uniquely described. Numerical experiments (Monte Carlo simulation) are executed under the assumption that ω is a random variable with a fixed probability distribution. Finally, based on the obtained results, a crucial hypothesis is formulated.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with both mutually independent multiplicative and additive noises. Using this stochastic differential equation, a method to evaluate a default probability under a given risk buffer is proposed.
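A one-dimensional Euler-Maruyama sketch of such a process (illustrative parameters; the study treats the M-dimensional case analytically), showing that the combination of mutually independent multiplicative and additive noise produces a heavy-tailed, q-Gaussian-like stationary distribution.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)

gamma, sig_mult, sig_add = 1.0, 0.7, 0.5       # illustrative parameters
dt, n_steps, n_paths = 1e-3, 20000, 2000

# Euler-Maruyama integration of dx = -gamma*x dt + sig_mult*x dW1 + sig_add dW2
# with mutually independent Wiener increments dW1, dW2.
x = np.zeros(n_paths)
for _ in range(n_steps):
    dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -gamma * x * dt + sig_mult * x * dw1 + sig_add * dw2

# The stationary distribution is heavy-tailed (q-Gaussian-like): the excess kurtosis
# is well above the Gaussian value of 0.
print("sample excess kurtosis:", kurtosis(x))
```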
Net present value probability distributions from decline curve reserves estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, D.E.; Huffman, C.H.; Thompson, R.S.
1995-12-31
This paper demonstrates how reserves probability distributions can be used to develop net present value (NPV) distributions. NPV probability distributions were developed from the rate and reserves distributions presented in SPE 28333. This real-data study used practicing engineers' evaluations of production histories. Two approaches were examined to quantify portfolio risk. The first approach, the NPV Relative Risk Plot, compares the mean NPV with the NPV relative risk ratio for the portfolio. The relative risk ratio is the NPV standard deviation (σ) divided by the mean (μ) NPV. The second approach, a Risk-Return Plot, is a plot of the mean (μ) discounted cash flow rate of return (DCFROR) versus the standard deviation (σ) of the DCFROR distribution. This plot provides a risk-return relationship for comparing various portfolios. These methods may help evaluate property acquisition and divestiture alternatives and assess the relative risk of a suite of wells or fields for bank loans.
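A minimal sketch of the first approach, assuming NPV samples per portfolio have already been generated from reserves distributions (the lognormal samples below are placeholders, not the SPE 28333 data).

```python
import numpy as np

# Placeholder NPV samples for two portfolios, e.g. propagated from reserves distributions.
rng = np.random.default_rng(3)
portfolios = {
    "portfolio A": rng.lognormal(mean=2.0, sigma=0.4, size=5000),
    "portfolio B": rng.lognormal(mean=2.2, sigma=0.8, size=5000),
}

# NPV Relative Risk Plot coordinates: mean NPV vs. relative risk ratio sigma/mu.
for name, npv in portfolios.items():
    mu, sigma = npv.mean(), npv.std(ddof=1)
    print(f"{name}: mean NPV = {mu:6.2f}, relative risk ratio = {sigma / mu:.3f}")
```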
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
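The square-root rule is easy to verify numerically; the sketch below (a discretized 1-D example, not the paper's derivation) builds the search distribution proportional to the square root of a Gaussian target and shows that its standard deviation widens by a factor of about sqrt(2), which follows directly from taking the square root of a Gaussian density.

```python
import numpy as np

# Discretized 1-D example with a Gaussian target distribution (unit standard deviation).
x = np.linspace(-5.0, 5.0, 1001)
target = np.exp(-0.5 * x**2)
target /= target.sum()

# Optimal search distribution when the searcher must get very close to the target:
# proportional to the square root of the target distribution.
search = np.sqrt(target)
search /= search.sum()

std_target = np.sqrt(np.sum(target * x**2))   # both distributions have zero mean
std_search = np.sqrt(np.sum(search * x**2))
print("target std:", round(std_target, 3), " search std:", round(std_search, 3),
      " ratio:", round(std_search / std_target, 3))   # ratio is close to sqrt(2)
```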
NASA Astrophysics Data System (ADS)
Montecinos, Alejandra; Davis, Sergio; Peralta, Joaquín
2018-07-01
The kinematics and dynamics of deterministic physical systems have been a foundation of our understanding of the world since Galileo and Newton. For real systems, however, uncertainty is largely present via external forces such as friction or lack of precise knowledge about the initial conditions of the system. In this work we focus on the latter case and describe the use of inference methodologies in solving the statistical properties of classical systems subject to uncertain initial conditions. In particular we describe the application of the formalism of maximum entropy (MaxEnt) inference to the problem of projectile motion, given information about the average horizontal range over many realizations. By using MaxEnt we can invert the problem and use the provided information on the average range to reduce the original uncertainty in the initial conditions. Also, additional insight into the initial condition's probabilities, and the projectile path distribution itself, can be achieved based on the value of the average horizontal range. The wide applicability of this procedure, as well as its ease of use, reveals a useful tool with which to revisit a large number of physics problems, from classrooms to frontier research.
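A minimal numerical sketch of this MaxEnt inversion, assuming a fixed launch speed and uncertainty only in the launch angle; the speed, the measured average range, and the angle grid are all hypothetical choices, not the paper's setup.

```python
import numpy as np
from scipy.optimize import brentq

g, v = 9.81, 10.0                          # gravity; assumed fixed launch speed (m/s)
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 2000)
R = v**2 * np.sin(2 * theta) / g           # horizontal range for each launch angle
R_bar = 5.0                                # hypothetical measured average range (m)

def mean_range(lam):
    # MaxEnt density over theta subject to <R> = R_bar is p(theta) proportional to exp(-lam * R).
    expo = -lam * R
    w = np.exp(expo - expo.max())          # shift the exponent for numerical stability
    return np.sum(R * w) / np.sum(w)

lam = brentq(lambda l: mean_range(l) - R_bar, -20.0, 20.0)   # solve the constraint for lam

expo = -lam * R
p = np.exp(expo - expo.max())
dtheta = theta[1] - theta[0]
p /= p.sum() * dtheta                      # normalized MaxEnt density over the launch angle
print("Lagrange multiplier:", round(lam, 4), " check <R>:", round((R * p).sum() * dtheta, 3))
```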
Coherent exciton transport in dendrimers and continuous-time quantum walks
NASA Astrophysics Data System (ADS)
Mülken, Oliver; Bierbaum, Veronika; Blumen, Alexander
2006-03-01
We model coherent exciton transport in dendrimers by continuous-time quantum walks. For dendrimers up to the second generation the coherent transport shows perfect recurrences when the initial excitation starts at the central node. For larger dendrimers, the recurrence ceases to be perfect, a fact which resembles results for discrete quantum carpets. Moreover, depending on the initial excitation site, we find that the coherent transport to certain nodes of the dendrimer has a very low probability. When the initial excitation starts from the central node, the problem can be mapped onto a line which simplifies the computational effort. Furthermore, the long time average of the quantum mechanical transition probabilities between pairs of nodes shows characteristic patterns and allows us to classify the nodes into clusters with identical limiting probabilities. For the (space) average of the quantum mechanical probability to be still or to be again at the initial site, we obtain, based on the Cauchy-Schwarz inequality, a simple lower bound which depends only on the eigenvalue spectrum of the Hamiltonian.
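A small sketch of a continuous-time quantum walk of this kind, using the graph Laplacian of a three-branch, two-generation tree as the transfer Hamiltonian; the toy geometry is an assumption for illustration, not necessarily the dendrimer topology used in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency of a small three-branch tree: central node 0, three branch nodes, two leaves each.
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5), (2, 6), (2, 7), (3, 8), (3, 9)]
n = 10
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
H = np.diag(A.sum(axis=1)) - A             # graph Laplacian used as the transfer Hamiltonian

def transition_probabilities(t, start):
    """CTQW transition probabilities |<k| exp(-i H t) |start>|^2 from node `start`."""
    U = expm(-1j * H * t)
    return np.abs(U[:, start]) ** 2

p = transition_probabilities(t=2.0, start=0)   # excitation starting at the central node
print(np.round(p, 3), " sum =", round(p.sum(), 6))
```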
Kennedy, Paula L; Woodbury, Allan D
2002-01-01
In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m2/s followed a log-normal distribution for both aquifers, with a natural-log mean of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data were available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, homogeneous in regions where little or no information is available to alter an initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
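The core updating idea can be caricatured with a conjugate normal model for ln T at a single location; this is a minimal scalar sketch assuming known variances, not the full geostatistical Bayesian update used in the paper.

```python
import numpy as np

# Prior on ln T (T in m^2/s) at one location, e.g. from regional statistics.
mu_prior, var_prior = -7.2, 1.0          # hypothetical prior variance
var_data = 0.5                           # assumed observation variance in ln T

obs = np.array([-6.8, -7.5, -7.1])       # hypothetical nearby ln T observations

# Conjugate normal update: the posterior precision is the sum of prior and data precisions.
prec_post = 1.0 / var_prior + obs.size / var_data
mu_post = (mu_prior / var_prior + obs.sum() / var_data) / prec_post
var_post = 1.0 / prec_post
print(f"posterior ln T: mean = {mu_post:.2f}, variance = {var_post:.2f}")
# With no observations the posterior equals the prior, so the estimated field stays
# homogeneous wherever no information is available -- the behaviour noted above.
```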
Versino, Daniele; Bronkhorst, Curt Allan
2018-01-31
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
Exact sampling hardness of Ising spin models
NASA Astrophysics Data System (ADS)
Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.
2017-09-01
We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough so that classical sampling of the output distribution is classically difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
A single population of red globular clusters around the massive compact galaxy NGC 1277
NASA Astrophysics Data System (ADS)
Beasley, Michael A.; Trujillo, Ignacio; Leaman, Ryan; Montes, Mireia
2018-03-01
Massive galaxies are thought to form in two phases: an initial collapse of gas and giant burst of central star formation, followed by the later accretion of material that builds up their stellar and dark-matter haloes. The systems of globular clusters within such galaxies are believed to form in a similar manner. The initial central burst forms metal-rich (spectrally red) clusters, whereas more metal-poor (spectrally blue) clusters are brought in by the later accretion of less-massive satellites. This formation process is thought to result in the multimodal optical colour distributions that are seen in the globular cluster systems of massive galaxies. Here we report optical observations of the massive relic-galaxy candidate NGC 1277—a nearby, un-evolved example of a high-redshift ‘red nugget’ galaxy. We find that the optical colour distribution of the cluster system of NGC 1277 is unimodal and entirely red. This finding is in strong contrast to other galaxies of similar and larger stellar mass, the cluster systems of which always exhibit (and are generally dominated by) blue clusters. We argue that the colour distribution of the cluster system of NGC 1277 indicates that the galaxy has undergone little (if any) mass accretion after its initial collapse, and use simulations of possible merger histories to show that the stellar mass due to accretion is probably at most ten per cent of the total stellar mass of the galaxy. These results confirm that NGC 1277 is a genuine relic galaxy and demonstrate that blue clusters constitute an accreted population in present-day massive galaxies.
Investigation of the delay time distribution of high power microwave surface flashover
NASA Astrophysics Data System (ADS)
Foster, J.; Krompholz, H.; Neuber, A.
2011-01-01
Characterizing and modeling the statistics associated with the initiation of gas breakdown has proven to be difficult due to a variety of rather unexplored phenomena involved. Experimental conditions for high power microwave window breakdown for pressures on the order of 100 to several hundred torr are complex: there are few to no naturally occurring free electrons in the breakdown region. The initial electron generation rate, from an external source, for example, is time dependent and so is the charge carrier amplification in the increasing radio frequency (RF) field amplitude with a rise time of 50 ns, which can be on the same order as the breakdown delay time. The probability of reaching a critical electron density within a given time period is composed of the statistical waiting time for the appearance of initiating electrons in the high-field region and the build-up of an avalanche with an inherent statistical distribution of the electron number. High power microwave breakdown and its delay time are of critical importance, since they limit the transmission through necessary windows, especially for high power, high altitude, low pressure applications. The delay time distribution of pulsed high power microwave surface flashover has been examined for nitrogen and argon as test gases for pressures ranging from 60 to 400 torr, with and without external UV illumination. A model has been developed for predicting the discharge delay time for these conditions. The results provide indications that field induced electron generation, other than standard field emission, plays a dominant role, which might be valid for other gas discharge types as well.
de Jong, Peter W; Hemerik, Lia; Gort, Gerrit; van Alphen, Jacques J M
2011-01-01
Females of the larval parasitoid of Drosophila, Asobara citri, from sub-Saharan Africa, defend patches with hosts by fighting and chasing conspecific females upon encounter. Females of the closely related, palearctic species Asobara tabida do not defend patches and often search simultaneously in the same patch. The effect of patch defence by A. citri females on their distribution in a multi-patch environment was investigated, and their distributions were compared with those of A. tabida. For both species 20 females were released from two release-points in replicate experiments. Females of A. citri quickly reached a regular distribution across 16 patches, with a small variance/mean ratio per patch. Conversely, A. tabida females initially showed a clumped distribution, and after gradual dispersion, a more Poisson-like distribution across patches resulted (variance/mean ratio was closer to 1 and higher than for A. citri). The dispersion of A. tabida was most probably an effect of exploitation: these parasitoids increasingly made shorter visits to already exploited patches. We briefly discuss hypotheses on the adaptive significance of patch defence behaviour or its absence in the light of differences in the natural history of both parasitoid species, notably the spatial distribution of their hosts.
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions do approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that use of long-tailed probability distributions (like Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics for lunar topography. References: [1] Moore, H. (1971), JGR, 75(11); [2] Marcus, A. H. (1969), JGR, 74(22); [3] Pike, R. J. (1970), U.S. Geological Survey Working Paper; [4] Costes, N. C., Farmer, J. E., and George, E. B. (1972), NASA Technical Report TR R-401; [5] Parker, M. N., and Tyler, G. L. (1973), Radio Science, 8(3), 177-184; [6] Alekseev, V. A., et al. (1968), Soviet Astronomy, Vol. 11, p. 860; [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488; [8] Smith et al. (2010), GRL 37, L18204, DOI: 10.1029/2010GL043751; [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P., LPSC 2013, #2924.
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
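A minimal sketch of how such a distribution can be used, assuming the actual-to-predicted max-q ratios are available as a sample; the lognormal ratios below are placeholders for the sounding rocket flight data.

```python
import numpy as np

# Placeholder sample of actual-to-predicted re-entry maximum dynamic pressure ratios.
rng = np.random.default_rng(4)
ratio = rng.lognormal(mean=0.0, sigma=0.15, size=60)

def exceedance_probability(samples, threshold):
    """Empirical probability that the actual/predicted ratio exceeds a threshold."""
    return float(np.mean(np.asarray(samples) > threshold))

# Probability that the actual max-q exceeds the prediction by more than 20%, which can be
# combined with damage-level/flight-environment relationships to assess heating damage risk.
print("P(ratio > 1.2) =", exceedance_probability(ratio, 1.2))
```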
Probability and the changing shape of response distributions for orientation.
Anderson, Britt
2014-11-18
Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability function. Assuming that the winds can be considered bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
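The Rayleigh result is straightforward to check by sampling; the sketch below assumes the special case of zero-mean, equal-variance, uncorrelated components (the component standard deviation is an arbitrary choice), for which the speed is exactly Rayleigh and the direction is uniform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sigma = 5.0                                # assumed common component standard deviation (m/s)

# Zero-mean, equal-variance, uncorrelated components.
u = rng.normal(0.0, sigma, 100000)
v = rng.normal(0.0, sigma, 100000)
speed = np.hypot(u, v)

ks = stats.kstest(speed, stats.rayleigh(scale=sigma).cdf)
print("KS statistic vs. Rayleigh(scale=sigma):", round(ks.statistic, 4))

direction = np.degrees(np.arctan2(v, u)) % 360.0
print("mean wind direction (deg):", round(direction.mean(), 1))   # ~180 for a uniform direction
```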
Joint probabilities and quantum cognition
NASA Astrophysics Data System (ADS)
de Barros, J. Acacio
2012-12-01
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have 1 or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
A simplified model for the assessment of the impact probability of fragments.
Gubinelli, Gianfilippo; Zanelli, Severino; Cozzani, Valerio
2004-12-31
A model was developed for the assessment of fragment impact probability on a target vessel, following the collapse and fragmentation of a primary vessel due to internal pressure. The model provides the probability of impact of a fragment with defined shape, mass and initial velocity on a target of a known shape and at a given position with respect to the source point. The model is based on the ballistic analysis of the fragment trajectory and on the determination of impact probabilities by the analysis of initial direction of fragment flight. The model was validated using available literature data.
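A heavily simplified, drag-free Monte Carlo sketch of the same idea (ground-footprint impact on a circular target); the fragment speed, target geometry, and uniform initial-direction assumption are illustrative choices, not the paper's ballistic model or its treatment of the initial flight direction.

```python
import numpy as np

rng = np.random.default_rng(6)
g = 9.81
v0 = 80.0                                  # assumed initial fragment speed (m/s)
n = 500000

# Assumed initial flight direction: uniform azimuth, elevation uniform in [0, 90] degrees.
azimuth = rng.uniform(0.0, 2.0 * np.pi, n)
elevation = rng.uniform(0.0, np.pi / 2.0, n)

# Drag-free ballistic range from a ground-level source point.
flight_range = v0**2 * np.sin(2.0 * elevation) / g
x = flight_range * np.cos(azimuth)
y = flight_range * np.sin(azimuth)

# Target: ground footprint of a vessel of radius 10 m centred 100 m from the source.
target_x, target_y, target_r = 100.0, 0.0, 10.0
hit = (x - target_x) ** 2 + (y - target_y) ** 2 <= target_r**2
print("estimated impact probability:", hit.mean())
```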
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 yields H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. The cumulative reaction probability (CRP) is obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation to account for the contribution of degrees of freedom missing in the 6D calculation is employed to obtain an approximate full-dimensional CRP. The thermal rate constant is compared with different experimental results.
Net present value approaches for drug discovery.
Svennebring, Andreas M; Wikberg, Jarl Es
2013-12-01
Three dedicated approaches to the calculation of the risk-adjusted net present value (rNPV) in drug discovery projects under different assumptions are suggested. The probability of finding a candidate drug suitable for clinical development and the time to the initiation of clinical development are assumed to be flexible, in contrast to previously used models. The rNPV of the post-discovery cash flows is calculated as the probability-weighted average of the rNPV at each potential time of initiation of clinical development. Practical considerations of how to set probability rates, in particular at the initiation and termination of a project, are discussed.
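A minimal sketch of the probability-weighted rNPV calculation described above; the discount rate, the initiation-time probabilities, and the post-discovery cash-flow profile are all hypothetical placeholders, not the paper's worked example.

```python
import numpy as np

r = 0.10                                       # assumed annual discount rate

# Hypothetical discovery outcomes: probability that clinical development is initiated in
# year t; the remaining probability mass (0.60 here) is failure to find a candidate drug.
t_start = np.array([3, 4, 5, 6])
p_start = np.array([0.10, 0.15, 0.10, 0.05])

def npv_post_discovery(t0, rate=r):
    """NPV at time 0 of post-discovery cash flows if development starts in year t0.
    Illustrative flat, already risk-adjusted cash-flow profile; not a real project model."""
    years = np.arange(t0, t0 + 10)
    cash = np.full(10, 50.0)                   # $MM per year
    return np.sum(cash / (1.0 + rate) ** years)

# rNPV of the discovery project: probability-weighted average over initiation times.
rnpv = sum(p * npv_post_discovery(t) for p, t in zip(p_start, t_start))
print(f"risk-adjusted NPV = {rnpv:.1f} $MM")
```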
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Statistical analysis for the summary of statistics is reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for survival data, the highest values of goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer. Creative Commons Attribution License
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
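The normalization argument can be caricatured by a counting version of the Hebbian rule; this is a sketch of my reading of the abstract, not the plasticity dynamics analysed in the paper: row-wise (pre-synaptic) competition recovers forward transition probabilities, while column-wise (post-synaptic) competition recovers backward ones.

```python
import numpy as np

rng = np.random.default_rng(7)

# A 3-state Markov chain with known forward transition probabilities P(next | current).
P_true = np.array([[0.1, 0.6, 0.3],
                   [0.5, 0.2, 0.3],
                   [0.3, 0.3, 0.4]])

# Generate a state sequence (the "experienced probabilistic sequence").
T = 50000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(3, p=P_true[states[t - 1]])

# Hebbian co-activation counts: pre-synaptic activity at t, post-synaptic activity at t+1.
C = np.zeros((3, 3))
for i, j in zip(states[:-1], states[1:]):
    C[i, j] += 1.0

W_forward = C / C.sum(axis=1, keepdims=True)   # pre-synaptic (row) competition -> P(next | current)
W_backward = C / C.sum(axis=0, keepdims=True)  # post-synaptic (column) competition -> P(previous | current)

print(np.round(W_forward, 2))                  # approximates P_true
```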
Response statistics of rotating shaft with non-linear elastic restoring forces by path integration
NASA Astrophysics Data System (ADS)
Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael
2017-07-01
Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is done with a linearized restoring force to see the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system additive noise. The latter allows a significant reduction in computational time compared to the classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Also, multidirectional Markov noise can be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.
Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...
2016-12-12
It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
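A minimal sketch of the Poisson-likelihood approach for a single-exponential decay plus background (synthetic histogram, illustrative parameters); the paper treats two-fluorophore mixtures and also compares binomial and maximum-likelihood variants, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Synthetic TCSPC histogram: single-exponential decay plus a constant background.
t = np.linspace(0.0, 20.0, 256)                # ns, bin centres
true_amp, true_tau, true_bg = 400.0, 3.0, 2.0
counts = rng.poisson(true_amp * np.exp(-t / true_tau) + true_bg)

def neg_log_likelihood(params):
    amp, tau, bg = params
    if amp <= 0 or tau <= 0 or bg <= 0:
        return np.inf
    mu = amp * np.exp(-t / tau) + bg
    # Negative Poisson log-likelihood, dropping the parameter-independent log(k!) term.
    return np.sum(mu - counts * np.log(mu))

res = minimize(neg_log_likelihood, x0=[300.0, 2.0, 1.0], method="Nelder-Mead")
print("fitted amplitude, lifetime (ns), background:", np.round(res.x, 2))
```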
Intelligent Planning and Scheduling for Controlled Life Support Systems
NASA Technical Reports Server (NTRS)
Leon, V. Jorge
1996-01-01
Planning in Controlled Ecological Life Support Systems (CELSS) requires special look ahead capabilities due to the complex and long-term dynamic behavior of biological systems. This project characterizes the behavior of CELSS, identifies the requirements of intelligent planning systems for CELSS, proposes the decomposition of the planning task into short-term and long-term planning, and studies the crop scheduling problem as an initial approach to long-term planning. CELSS is studied in the realm of Chaos. The amount of biomass in the system is modeled using a bounded quadratic iterator. The results suggests that closed ecological systems can exhibit periodic behavior when imposed external or artificial control. The main characteristics of CELSS from the planning and scheduling perspective are discussed and requirements for planning systems are given. Crop scheduling problem is identified as an important component of the required long-term lookahead capabilities of a CELSS planner. The main characteristics of crop scheduling are described and a model is proposed to represent the problem. A surrogate measure of the probability of survival is developed. The measure reflects the absolute deviation of the vital reservoir levels from their nominal values. The solution space is generated using a probability distribution which captures both knowledge about the system and the current state of affairs at each decision epoch. This probability distribution is used in the context of an evolution paradigm. The concepts developed serve as the basis for the development of a simple crop scheduling tool which is used to demonstrate its usefulness in the design and operation of CELSS.
Phase transitions in community detection: A solvable toy model
NASA Astrophysics Data System (ADS)
Ver Steeg, Greg; Moore, Cristopher; Galstyan, Aram; Allahverdyan, Armen
2014-05-01
Recently, it was shown that there is a phase transition in the community detection problem. This transition was first computed using the cavity method, and has been proved rigorously in the case of q = 2 groups. However, analytic calculations using the cavity method are challenging since they require us to understand probability distributions of messages. We study analogous transitions in the so-called “zero-temperature inference” model, where this distribution is supported only on the most likely messages. Furthermore, whenever several messages are equally likely, we break the tie by choosing among them with equal probability, corresponding to an infinitesimal random external field. While the resulting analysis overestimates the thresholds, it reproduces some of the qualitative features of the system. It predicts a first-order detectability transition whenever q > 2 (as opposed to q > 4 according to the finite-temperature cavity method). It also has a regime analogous to the “hard but detectable” phase, where the community structure can be recovered, but only when the initial messages are sufficiently accurate. Finally, we study a semisupervised setting where we are given the correct labels for a fraction ρ of the nodes. For q > 2, we find a regime where the accuracy jumps discontinuously at a critical value of ρ.
Decay channels of Al L2,3 excitons and the absence of O K excitons in α-Al2O3
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, W.L.; Jia, J.; Dong, Q.
1991-12-15
The Al L2,3 and O K thresholds for single-crystal α-Al2O3 have been studied by photoemission. Energy-distribution curves, constant-initial-state (CIS), and constant-final-state (CFS) spectra are reported and compared to the absorption spectrum reported previously. An exciton appears as a doublet at threshold in the Al L2,3 CFS, CIS, and absorption spectra. The details of the Al L2,3 CFS spectrum and absorption spectrum are similar, while the exciton is the only feature present in the CIS spectrum. Comparisons of the various Al L2,3 spectra allow the probabilities of different exciton decay channels to be determined. The probability for nonradiative direct recombination of the exciton is found to be (8 ± 1)% and the probability for Auger decay of the exciton is found to be (72 ± 20)%. Comparisons of the O K CIS and CFS spectra suggest that no O K exciton is formed.
Modeling uncertainty in producing natural gas from tight sands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chermak, J.M.; Dahl, C.A.; Patrick, R.H.
1995-12-31
Since accurate geologic, petroleum engineering, and economic information are essential ingredients in making profitable production decisions for natural gas, we combine these ingredients in a dynamic framework to model natural gas reservoir production decisions. We begin with the certainty case before proceeding to consider how uncertainty might be incorporated in the decision process. Our production model uses dynamic optimal control to combine economic information with geological constraints to develop optimal production decisions. To incorporate uncertainty into the model, we develop probability distributions on geologic properties for the population of tight gas sand wells and perform a Monte Carlo study to select a sample of wells. Geological production factors, completion factors, and financial information are combined into the hybrid economic-petroleum reservoir engineering model to determine the optimal production profile, initial gas stock, and net present value (NPV) for an individual well. To model the probability of the production abandonment decision, the NPV data is converted to a binary dependent variable. A logit model is used to model this decision as a function of the above geological and economic data to give probability relationships. Additional ways to incorporate uncertainty into the decision process include confidence intervals and utility theory.
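The logit step described above can be illustrated with a small sketch. The covariates, the NPV rule, and all numbers below are hypothetical stand-ins rather than the authors' dataset; the point is only to show a binary abandonment indicator regressed on geological and economic variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
perm = rng.lognormal(mean=-2.0, sigma=1.0, size=n)   # permeability (md), hypothetical
price = rng.uniform(1.5, 3.5, size=n)                # gas price ($/Mcf), hypothetical

# Hypothetical NPV rule: the well is abandoned (y = 1) when NPV <= 0.
npv = 2.0 * np.log(perm) + 1.5 * price + 0.25 + rng.normal(0.0, 1.0, n)
y = (npv <= 0).astype(int)

X = np.column_stack([np.log(perm), price])
model = LogisticRegression().fit(X, y)

# Estimated abandonment probability for a 0.05 md well at $2.50/Mcf gas.
print(model.predict_proba([[np.log(0.05), 2.5]])[0, 1])
```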
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
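The discretize-then-regularize step can be sketched in a few lines of numpy. A generic smoothing kernel stands in for the v sin I projection kernel, and the grid, noise level, and regularization parameter are illustrative assumptions, not the authors' choices.

```python
import numpy as np

# Discretize a Fredholm integral of the first kind, y = K f, on a uniform grid.
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
K = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2) * dx   # generic smoothing kernel

f_true = np.exp(-((x - 0.4) / 0.1) ** 2)                    # "true" distribution (assumed)
y = K @ f_true + 1e-3 * np.random.default_rng(1).normal(size=n)   # noisy data

# Tikhonov regularization: minimize ||K f - y||^2 + lam * ||f||^2.
lam = 1e-4
f_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

print("relative reconstruction error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```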
Quantitative assessment of building fire risk to life safety.
Guanquan, Chu; Jinhua, Sun
2008-06-01
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of design fires is considered based on different fire growth rates, so that the uncertainty in the onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant premovement time is treated as a probability distribution. The consequences of a fire scenario can then be evaluated from the probability distributions of evacuation time and onset time to untenable conditions. Fire risk to life safety can then be evaluated based on the occurrence probability and consequences of every fire scenario. To illustrate the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many items and has a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull family. In this paper we introduce the basic notions of an exponential competing risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function and then the posterior distribution, together with point and interval estimates and the hazard and reliability functions. The net probability of failure when only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
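For independent exponential causes, one convenient non-informative choice is a Jeffreys-type prior 1/λ for each cause-specific rate, which yields gamma posteriors; the paper's prior may differ, so the sketch below is only an assumed illustration with made-up failure counts and exposure time. The net and crude probabilities use the standard exponential competing-risks formulas.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: total exposure time T and failure counts by cause.
T = 500.0                      # total time on test (assumed units)
d = {"cause_1": 7, "cause_2": 3}

t = 50.0                       # horizon for the failure probabilities
draws = 20_000

# With a Jeffreys-type prior p(lambda) ~ 1/lambda, the posterior of each
# cause-specific rate is Gamma(shape=d_j, rate=T).
lam = {c: rng.gamma(shape=dj, scale=1.0 / T, size=draws) for c, dj in d.items()}
lam_tot = sum(lam.values())

net_1 = 1.0 - np.exp(-lam["cause_1"] * t)                          # only cause 1 acting
crude_1 = lam["cause_1"] / lam_tot * (1.0 - np.exp(-lam_tot * t))  # cause 1 among all causes

print("posterior mean net P(fail by t | cause 1 only):", net_1.mean())
print("posterior mean crude P(fail by t from cause 1):", crude_1.mean())
```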
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
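The core computation can be re-created generically: Newton iteration on the Poisson parameter so that the cumulative probability of k or fewer events matches a target value. This is a sketch of the idea, not the NEWTPOIS source.

```python
from scipy.stats import poisson

def poisson_param_for_cdf(k, p, lam=1.0, tol=1e-10, max_iter=100):
    """Find lambda such that P(X <= k) = p for X ~ Poisson(lambda)."""
    for _ in range(max_iter):
        g = poisson.cdf(k, lam) - p
        dg = -poisson.pmf(k, lam)      # d/dlambda P(X <= k) = -pmf(k; lambda)
        step = g / dg
        lam = max(lam - step, 1e-12)   # Newton update, keeping the parameter positive
        if abs(step) < tol:
            break
    return lam

# Example: Poisson parameter at which P(X <= 3) = 0.95.
lam = poisson_param_for_cdf(3, 0.95)
print(lam, poisson.cdf(3, lam))
```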
Competing contact processes in the Watts-Strogatz network
NASA Astrophysics Data System (ADS)
Rybak, Marcin; Malarz, Krzysztof; Kułakowski, Krzysztof
2016-06-01
We investigate two competing contact processes on a set of Watts-Strogatz networks with the clustering coefficient tuned by rewiring. The base for network construction is a one-dimensional chain of N sites, where each site i is directly linked to the nodes labelled i ± 1 and i ± 2, so initially each node has the same degree k_i = 4. Periodic boundary conditions are assumed as well. For each node i the links to sites i + 1 and i + 2 are rewired to two randomly selected nodes so far not connected to node i. An increase of the rewiring probability q influences the node degree distribution and the network clustering coefficient 𝓒. For given values of the rewiring probability q the set 𝓝(q) = {𝓝1, 𝓝2, ..., 𝓝M} of M networks is generated. The network's nodes are decorated with spin-like variables s_i ∈ {S, D}. During the simulation, each S node having a D site in its neighbourhood converts this neighbour from the D to the S state. Conversely, a node in the D state having at least one neighbour also in the D state converts all nearest neighbours of this pair into the D state; the latter is realized with probability p. We plot the dependence of the final density n_S^T of S nodes on the initial fraction n_S^0 of S nodes. Then, we construct the surface of unstable fixed points in (𝓒, p, n_S^0) space. The system evolves more often toward n_S^T = 1 for (𝓒, p, n_S^0) points situated above this surface, while starting the simulation with (𝓒, p, n_S^0) parameters situated below this surface leads the system to n_S^T = 0. The points on this surface correspond to the value n_S^* of the initial fraction of S nodes (for fixed values of 𝓒 and p) for which the final density is n_S^T = 1/2.
Stellar mass spectrum within massive collapsing clumps. I. Influence of the initial conditions
NASA Astrophysics Data System (ADS)
Lee, Yueh-Ning; Hennebelle, Patrick
2018-04-01
Context. Stars constitute the building blocks of our Universe, and their formation is an astrophysical problem of great importance. Aim. We aim to understand the fragmentation of massive molecular star-forming clumps and the effect of initial conditions, namely the density and the level of turbulence, on the resulting distribution of stars. For this purpose, we conduct numerical experiments in which we systematically vary the initial density over four orders of magnitude and the turbulent velocity over a factor ten. In a companion paper, we investigate the dependence of this distribution on the gas thermodynamics. Methods: We performed a series of hydrodynamical numerical simulations using adaptive mesh refinement, with special attention to numerical convergence. We also adapted an existing analytical model to the case of collapsing clouds by employing a density probability distribution function (PDF) ∝ ρ^{-1.5} instead of a lognormal distribution. Results: Simulations and analytical model both show two support regimes, a thermally dominated regime and a turbulence-dominated regime. For the first regime, we infer that dN/dlog M ∝ M^0, while for the second regime, we obtain dN/dlog M ∝ M^{-3/4}. This is valid up to about ten times the mass of the first Larson core, as explained in the companion paper, leading to a peak of the mass spectrum at 0.2 M⊙. From this point, the mass spectrum decreases with decreasing mass except for the most diffuse clouds, where disk fragmentation leads to the formation of objects down to the mass of the first Larson core, that is, to a few 10^{-2} M⊙. Conclusions: Although the mass spectra we obtain for the most compact clouds qualitatively resemble the observed initial mass function, the distribution exponent is shallower than the expected Salpeter exponent of -1.35. Nonetheless, we observe a possible transition toward a slightly steeper value that is broadly compatible with the Salpeter exponent for masses above a few solar masses. This change in behavior is associated with the change in density PDF, which switches from a power-law to a lognormal distribution. Our results suggest that while gravitationally induced fragmentation could play an important role for low masses, it is likely the turbulently induced fragmentation that leads to the Salpeter exponent.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
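A reduced two-dimensional version of the sampling step can be sketched as follows: build a histogram from observed lesion positions and draw new positions from it. The positions here are synthetic and the real system works in four dimensions with piecewise affine mapping, so this is only an illustration of the histogram-sampling idea.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observed" lesion positions in a standardised breast image (assumed).
obs = rng.normal(loc=[0.4, 0.6], scale=[0.1, 0.15], size=(3000, 2))

# Empirical probability distribution on a 2-D grid.
hist, xedges, yedges = np.histogram2d(obs[:, 0], obs[:, 1], bins=50)
prob = hist.ravel() / hist.sum()

# Sample grid cells according to the empirical distribution, then jitter
# uniformly inside each sampled cell.
idx = rng.choice(prob.size, size=10, p=prob)
ix, iy = np.unravel_index(idx, hist.shape)
x = rng.uniform(xedges[ix], xedges[ix + 1])
y = rng.uniform(yedges[iy], yedges[iy + 1])
print(np.column_stack([x, y]))
```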
Northern peatland initiation lagged abrupt increases in deglacial atmospheric CH4.
Reyes, Alberto V; Cooke, Colin A
2011-03-22
Peatlands are a key component of the global carbon cycle. Chronologies of peatland initiation are typically based on compiled basal peat radiocarbon (14C) dates and frequency histograms of binned calibrated age ranges. However, such compilations are problematic because poor quality 14C dates are commonly included and because frequency histograms of binned age ranges introduce chronological artefacts that bias the record of peatland initiation. Using a published compilation of 274 basal 14C dates from Alaska as a case study, we show that nearly half the 14C dates are inappropriate for reconstructing peatland initiation, and that the temporal structure of peatland initiation is sensitive to sampling biases and treatment of calibrated 14C dates. We present revised chronologies of peatland initiation for Alaska and the circumpolar Arctic based on summed probability distributions of calibrated 14C dates. These revised chronologies reveal that northern peatland initiation lagged abrupt increases in atmospheric CH4 concentration at the start of the Bølling-Allerød interstadial (Termination 1A) and the end of the Younger Dryas chronozone (Termination 1B), suggesting that northern peatlands were not the primary drivers of the rapid increases in atmospheric CH4. Our results demonstrate that subtle methodological changes in the synthesis of basal 14C ages lead to substantially different interpretations of temporal trends in peatland initiation, with direct implications for the role of peatlands in the global carbon cycle.
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
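The magnitude of the extrapolation risk can be illustrated with a toy scaling experiment of the kind described. All distributional choices below are assumptions: a normal model is fitted to small samples drawn from a heavier-tailed population, and the extrapolated extreme quantile is compared with the true one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_dist = stats.t(df=4)              # heavier-tailed "truth" (assumed)
q = 0.999                              # low-probability, high-consequence level
true_q = true_dist.ppf(q)

for n in (10, 30, 100, 1000):
    errs = []
    for _ in range(2000):
        x = true_dist.rvs(size=n, random_state=rng)
        mu, sigma = x.mean(), x.std(ddof=1)
        errs.append(stats.norm(mu, sigma).ppf(q) - true_q)   # normal-tail extrapolation
    print(f"n={n:5d}  mean bias of extrapolated {q:.3f}-quantile: {np.mean(errs):+.2f}")
```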
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu
In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
Rogue waves and large deviations in deep sea.
Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric
2018-01-30
The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.
NASA Astrophysics Data System (ADS)
Ostapenko, N. S.; Neroda, O. N.
2016-05-01
The paper discusses factors in the deposition and concentration of native gold and the spatial distribution of its individuals within the sulfide-poor gold-quartz veins at the mesoabyssal Tokur deposit. The major factors in the deposition of gold were sealing of the hydrothermal system, a sudden drop in fluid pressure, and repeated immiscibility in the fluid. Native gold was deposited in relation to the initial acts of prolonged and discrete opening and preopening of cavities in three mineral assemblages of productive association II. Most native gold individuals with a visible size of 0.1-1.5 mm occur together with the early generation of quartz 2 on cavity walls adjacent to altered rocks. This is caused by the high content of Au complexes in the initial hydrothermal solutions favoring rapid oversaturation during cavity formation. Gold fills interstices between grains of quartz 2 throughout the deposit and mineral assemblages. A vertical-flow distribution of gold has been established in economic veins; the upper and middle levels are enriched in gold, and samples with the greatest gold grade of 100-500 g/t or higher are concentrated there. This is caused both by the predominance of mineral association II at these levels and by probable natural flotation of gold grains contained in the gold-gas associate formed during immiscibility of the hydrothermal fluid at the second stage of the ore-forming process.
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ), while the conditional probability distributions P(U,λ), U = A, A', B, B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of the relative frequencies ν_n, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden-variable description; the responses of macroscopic apparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model the probabilities for physical observables are ordinary probabilities. At the same time Bell's inequality is violated.
NASA Astrophysics Data System (ADS)
Ferwerda, Cameron; Lipan, Ovidiu
2016-11-01
Akin to electric circuits, we construct biocircuits that are manipulated by cutting and assembling channels through which stochastic information flows. This diagrammatic manipulation allows us to create a method which constructs networks by joining building blocks selected so that (a) they cover only basic processes; (b) it is scalable to large networks; (c) the mean and variance-covariance from the Pauli master equation form a closed system; and (d) given the initial probability distribution, no special boundary conditions are necessary to solve the master equation. The method aims to help with both designing new synthetic signaling pathways and quantifying naturally existing regulatory networks.
Bayesian approach to analyzing holograms of colloidal particles.
Dimiduk, Thomas G; Manoharan, Vinothan N
2016-10-17
We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
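A stripped-down sketch of the posterior-sampling idea is given below. It is not the authors' tempered scheme: a toy Gaussian-spot forward model stands in for Lorenz-Mie scattering, a flat prior is assumed, and a plain random-walk Metropolis sampler replaces the tempered Markov-chain Monte Carlo method.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward_model(params, grid):
    """Toy stand-in for a hologram model: a Gaussian spot with center and width."""
    x0, y0, w = params
    return np.exp(-((grid[0] - x0) ** 2 + (grid[1] - y0) ** 2) / (2 * w ** 2))

grid = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
true_params = (0.52, 0.47, 0.08)
data = forward_model(true_params, grid) + 0.05 * rng.normal(size=grid[0].shape)

def log_post(params, sigma=0.05):
    if params[2] <= 0:
        return -np.inf                      # flat prior with positivity on the width
    resid = data - forward_model(params, grid)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Plain Metropolis random walk over (x0, y0, w).
theta = np.array([0.5, 0.5, 0.1])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=[0.01, 0.01, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

samples = np.array(samples[1000:])          # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```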
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
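For the closely related maximum-entropy random walk on an undirected graph (uniform path weights, no additional constraints, used here only to illustrate how maximizing path entropy pins down the transition probabilities), the transition matrix follows from the leading eigenvector of the adjacency matrix. This is not the constrained process analyzed in the paper, only a minimal special case.

```python
import numpy as np

# Small undirected graph: a 5-node path, as an illustrative state-space topology.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Leading eigenvalue/eigenvector of the adjacency matrix (Perron vector).
vals, vecs = np.linalg.eigh(A)
lam, psi = vals[-1], np.abs(vecs[:, -1])

# Maximum-entropy random walk: P_ij = A_ij * psi_j / (lam * psi_i).
P = A * psi[None, :] / (lam * psi[:, None])
pi = psi ** 2 / np.sum(psi ** 2)          # stationary distribution

print("row sums:", P.sum(axis=1))         # each row sums to 1
print("stationary distribution:", np.round(pi, 3))
# Note how pi differs from the degree-proportional (Boltzmann-like) distribution
# of the ordinary random walk, concentrating on the central nodes of the path.
```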
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-à-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) on the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-Square test (χ²), were employed. The best-fit probability distribution was then identified from the highest overall score obtained across the three goodness-of-fit tests. Results revealed that the normal distribution fitted best for the annual, post-monsoon and summer season MDR, while the lognormal, Weibull and Pearson 5 distributions fitted best for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. On the island, rainfall anomalies can pose a climatic threat to the sustainability of agricultural production and thus adequate adaptation and mitigation measures are needed.
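The fitting-and-selection workflow described above can be sketched with scipy. For brevity the sketch uses synthetic annual maxima, a subset of candidate distributions, and only the K-S statistic; the study itself uses the full set of models and three goodness-of-fit tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mdr = stats.lognorm(s=0.4, scale=60).rvs(29, random_state=rng)   # synthetic annual MDR (mm)

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}

best_name, best_frozen, best_ks = None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(mdr)
    ks = stats.kstest(mdr, dist.cdf, args=params).statistic
    print(f"{name:10s} K-S statistic = {ks:.3f}")
    if ks < best_ks:
        best_name, best_frozen, best_ks = name, dist(*params), ks

# Return levels from the best-fitting model: MDR exceeded once in T years on average.
for T in (2, 5, 10, 20, 25):
    print(f"T = {T:2d} yr  estimated MDR = {best_frozen.ppf(1 - 1 / T):.0f} mm")
```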
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2017-05-01
We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
NASA Instrument Cost/Schedule Model
NASA Technical Reports Server (NTRS)
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
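The joint cost-schedule risk idea can be illustrated with a small Monte Carlo sketch. The lognormal marginals, the correlation, and the targets below are assumptions, not NICM's calibrated values; the joint confidence level is simply the fraction of samples in which both cost and schedule come in at or under their targets.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Correlated standard normals via a Gaussian copula (assumed correlation 0.6).
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Assumed lognormal marginals for cost ($M) and schedule (months).
cost = np.exp(np.log(100.0) + 0.25 * z[:, 0])
sched = np.exp(np.log(36.0) + 0.15 * z[:, 1])

cost_target, sched_target = 110.0, 40.0
jcl = np.mean((cost <= cost_target) & (sched <= sched_target))
print(f"P(cost <= {cost_target})  = {np.mean(cost <= cost_target):.2f}")
print(f"P(sched <= {sched_target}) = {np.mean(sched <= sched_target):.2f}")
print(f"Joint confidence level     = {jcl:.2f}")
```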
Non-classicality of the molecular vibrations assisting exciton energy transfer at room temperature
O’Reilly, Edward J.; Olaya-Castro, Alexandra
2014-01-01
Advancing the debate on quantum effects in light-initiated reactions in biology requires clear identification of non-classical features that these processes can exhibit and utilize. Here we show that in prototype dimers present in a variety of photosynthetic antennae, efficient vibration-assisted energy transfer in the sub-picosecond timescale and at room temperature can manifest and benefit from non-classical fluctuations of collective pigment motions. Non-classicality of initially thermalized vibrations is induced via coherent exciton–vibration interactions and is unambiguously indicated by negativities in the phase–space quasi-probability distribution of the effective collective mode coupled to the electronic dynamics. These quantum effects can be prompted upon incoherent input of excitation. Our results therefore suggest that investigation of the non-classical properties of vibrational motions assisting excitation and charge transport, photoreception and chemical sensing processes could be a touchstone for revealing a role for non-trivial quantum phenomena in biology. PMID:24402469
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEneaney, William M.
2004-08-15
Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.
Self-narrowing of size distributions of nanostructures by nucleation antibunching
NASA Astrophysics Data System (ADS)
Glas, Frank; Dubrovskii, Vladimir G.
2017-08-01
We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0, 1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
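The monkey model itself is easy to reproduce numerically. The sketch below enumerates all words up to a fixed length, with letter probabilities taken as the spacings of a random division of the unit interval, and checks the log-log slope of the rank-probability curve; the alphabet size, word-length cutoff, and space probability are illustrative choices, not values from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)

n_letters, p_space, max_len = 8, 0.2, 5   # illustrative choices

# Letter probabilities: spacings of a random division of the unit interval,
# rescaled so that the letters and the space key sum to one.
cuts = np.sort(rng.uniform(size=n_letters - 1))
letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1.0 - p_space)

# Probability of each word (a run of letters terminated by hitting the space key).
word_probs = []
for length in range(1, max_len + 1):
    for word in itertools.product(range(n_letters), repeat=length):
        word_probs.append(np.prod(letter_p[list(word)]) * p_space)

word_probs = np.sort(word_probs)[::-1]
ranks = np.arange(1, len(word_probs) + 1)

# Least-squares slope of log(probability) vs. log(rank); it approaches -1 as the
# alphabet grows (with only 8 letters it is roughly, not exactly, -1).
slope = np.polyfit(np.log(ranks), np.log(word_probs), 1)[0]
print("number of words:", len(word_probs), " fitted exponent:", round(slope, 2))
```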
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Since Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is applicable in a just and equilibrium economy, so that a particular income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g., the USA. This result implies that the hypothesis of equal probability may be suitable only for "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability does not apply.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h month, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to the gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
The influence of persuasion in opinion formation and polarization
NASA Astrophysics Data System (ADS)
La Rocca, C. E.; Braunstein, L. A.; Vazquez, F.
2014-05-01
We present a model that explores the influence of persuasion in a population of agents with positive and negative opinion orientations. The opinion of each agent is represented by an integer number k that expresses its level of agreement on a given issue, from totally against, k = -M, to totally in favor, k = M. Same-orientation agents persuade each other with probability p, becoming more extreme, while opposite-orientation agents become more moderate as they reach a compromise with probability q. The population initially evolves to (a) a polarized state for r = p/q > 1, where the opinion distribution is peaked at the extreme values k = ±M, or (b) a centralized state for r < 1, with most opinions around k = ±1. When r ≫ 1, polarization lasts for a time that diverges as r^M ln N, where N is the population's size. Finally, an extremist consensus (k = M or -M) is reached in a time that scales as r^{-1} for r ≪ 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruderman, M.
1984-09-01
The youngest known radiopulsar is the rapidly spinning magnetized neutron star which powers the Crab Nebula, the remnant of the historical supernova explosion of 1054 AD. Similar neutron stars are probably born at least every few hundred years, but are less frequent than Galactic supernova explosions. They are initially sources of extreme relativistic electron and/or positron winds (approx. 10^38 s^-1 of 10^12 eV leptons) which greatly decrease as the neutron stars spin down to become mature pulsars. After several million years these neutron stars are no longer observed as radiopulsars, perhaps because of large magnetic field decay. However, a substantial fraction of the 10^8 old dead pulsars in the Galaxy are the most probable source of the isotropically distributed γ-ray bursts detected several times per week at the Earth. Some old neutron stars are spun up by accretion from companions and resurrected as rapidly spinning, low-magnetic-field radiopulsars. 52 references, 6 figures, 3 tables.
Mutant number distribution in an exponentially growing population
NASA Astrophysics Data System (ADS)
Keller, Peter; Antal, Tibor
2015-01-01
We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.
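A direct Monte Carlo rendering of this kind of model reproduces the heavy-tailed mutant-number distribution. The sketch below uses deterministic exponential wild-type growth, mutation events arriving in proportion to population size, and pure-birth (Yule) mutant clones as a simplification of the birth-death case; all rates and the final time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

beta, mu, T = 1.0, 1e-5, np.log(1e5)   # growth rate, mutation rate, final time (assumed)
n_runs = 20_000

mean_muts = mu * (np.exp(beta * T) - 1.0) / beta   # expected number of mutation events
mutants = np.zeros(n_runs, dtype=np.int64)

for i in range(n_runs):
    k = rng.poisson(mean_muts)                     # mutation events in this culture
    if k == 0:
        continue
    u = rng.uniform(size=k)
    t = np.log(1.0 + u * (np.exp(beta * T) - 1.0)) / beta   # arrival times, density ~ e^(beta t)
    # Each clone grows as a pure-birth (Yule) process, so its size at time T is
    # geometrically distributed with parameter exp(-beta * (T - t)).
    mutants[i] = rng.geometric(np.exp(-beta * (T - t))).sum()

print("P(no mutants)      :", np.mean(mutants == 0))
print("median mutant count:", np.median(mutants))
print("mean mutant count  :", mutants.mean())      # heavy tail: mean >> median
```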
Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.
Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang
2017-02-14
In this paper, the amplitude probability density (APD) of the wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the power line, the measured field data were first preprocessed to suppress baseline wandering and harmonics by symmetric wavelet transform and least squares methods. Then statistical analysis was performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for depicting the ELF atmospheric noise preprocessed by a hole-puncher operator, while for the VLF atmospheric noise a symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf).
The first-digit frequencies in data of turbulent flows
NASA Astrophysics Data System (ADS)
Biau, Damien
2015-12-01
Considering the first significant digits (noted d) in data sets of dissipation for turbulent flows, the probability to find a given number (d = 1 or 2 or …9) would be 1/9 for a uniform distribution. Instead the probability closely follows Newcomb-Benford's law, namely P(d) = log(1 + 1 / d) . The discrepancies between Newcomb-Benford's law and first-digits frequencies in turbulent data are analysed through Shannon's entropy. The data sets are obtained with direct numerical simulations for two types of fluid flow: an isotropic case initialized with a Taylor-Green vortex and a channel flow. Results are in agreement with Newcomb-Benford's law in nearly homogeneous cases and the discrepancies are related to intermittent events. Thus the scale invariance for the first significant digits, which supports Newcomb-Benford's law, seems to be related to an equilibrium turbulent state, namely with a significant inertial range. A matlab/octave program provided in appendix is such that part of the presented results can easily be replicated.
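Checking first-digit frequencies against Newcomb-Benford's law takes only a few lines; here the "data" are synthetic log-uniform values standing in for a dissipation field, so the agreement is by construction and serves only to show the mechanics of the comparison.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic stand-in for dissipation data: positive values spread over several decades.
data = 10 ** rng.uniform(-4, 2, size=100_000)

mantissa = data / 10 ** np.floor(np.log10(data))
first_digit = np.clip(np.floor(mantissa).astype(int), 1, 9)

observed = np.bincount(first_digit, minlength=10)[1:] / data.size
benford = np.log10(1 + 1 / np.arange(1, 10))

for d in range(1, 10):
    print(f"d={d}  observed={observed[d - 1]:.3f}  Benford={benford[d - 1]:.3f}")
```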
Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise
Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang
2017-01-01
In this paper, the amplitude probability density (APD) of the wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the power line, the measured field data were first preprocessed to suppress baseline wandering and harmonics by symmetric wavelet transform and least squares methods. Then statistical analysis was performed on the atmospheric noise at different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate for depicting the ELF atmospheric noise preprocessed by a hole-puncher operator, while for the VLF atmospheric noise a symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf). PMID:28216590
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
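The quantity in question can also be checked by brute force: for a two-interval forced-choice task, the maximum-likelihood observer picks the interval with the larger likelihood ratio, and its proportion correct can be estimated by Monte Carlo for any pair of distributions. The sketch below is a verification device under that assumption, not the authors' closed-form expression or their MATLAB code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 200_000

def max_pc_2afc(noise, signal):
    """Monte Carlo maximum proportion correct for 2-interval forced choice."""
    x0 = noise.rvs(size=n, random_state=rng)     # observation from the noise interval
    x1 = signal.rvs(size=n, random_state=rng)    # observation from the signal interval
    llr0 = signal.logpdf(x0) - noise.logpdf(x0)  # log likelihood ratios
    llr1 = signal.logpdf(x1) - noise.logpdf(x1)
    return np.mean(llr1 > llr0) + 0.5 * np.mean(llr1 == llr0)

# Equal-variance Gaussian case: should match Phi(d'/sqrt(2)) for d' = 1.
print(max_pc_2afc(stats.norm(0, 1), stats.norm(1, 1)), stats.norm.cdf(1 / np.sqrt(2)))

# A non-Gaussian (uniform) case handled by the same brute-force recipe.
print(max_pc_2afc(stats.uniform(0, 1), stats.uniform(0.3, 1)))
```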
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
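The estimation of the intensity distribution can be sketched with a Gaussian mixture whose number of components is chosen by an information criterion. Here scikit-learn's BIC is used as a generic stand-in for the model-selection criterion discussed in the paper, and the column intensities are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)

# Synthetic column intensities: columns of 1-4 atoms, intensity roughly proportional to count.
counts = rng.choice([1, 2, 3, 4], size=400, p=[0.4, 0.3, 0.2, 0.1])
intensity = counts + rng.normal(scale=0.15, size=counts.size)
X = intensity.reshape(-1, 1)

# Choose the number of mixture components with BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 8)]
best = min(fits, key=lambda g: g.bic(X))
print("selected number of components:", best.n_components)

# Assign each atom column to the most probable component (i.e., atom count).
order = np.argsort(best.means_.ravel())           # sort components by mean intensity
label_to_count = {lab: i + 1 for i, lab in enumerate(order)}
assigned = np.array([label_to_count[lab] for lab in best.predict(X)])
print("fraction of columns counted correctly:", np.mean(assigned == counts))
```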
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
On the application of a hairpin vortex model of wall turbulence to trailing edge noise prediction
NASA Technical Reports Server (NTRS)
Liu, N. S.; Shamroth, S. J.
1985-01-01
The goal is to develop a technique via a hairpin vortex model of the turbulent boundary layer, which would lead to the estimation of the aerodynamic input for use in trailing edge noise prediction theories. The work described represents an initial step in reaching this goal. The hairpin vortex is considered as the underlying structure of the wall turbulence and the turbulent boundary layer is viewed as an ensemble of typical hairpin vortices of different sizes. A synthesis technique is examined which links the mean flow and various turbulence quantities via these typical vortices. The distribution of turbulence quantities among vortices of different scales follows directly from the probability distribution needed to give the measured mean flow vorticity. The main features of individual representative hairpin vortices are discussed in detail and a preliminary assessment of the synthesis approach is made.
NASA Astrophysics Data System (ADS)
Hernández Vera, Mario; Wester, Roland; Gianturco, Francesco Antonio
2018-01-01
We construct velocity map images of the proton transfer reaction between helium and the molecular hydrogen ion H2+. We perform simulations of imaging experiments at one representative total collision energy, taking into account the inherent aberrations of the velocity mapping, in order to explore the feasibility of direct comparisons between theory and future experiments planned in our laboratory. The asymptotic angular distributions of the fragments in 3D velocity space are determined from the quantum state-to-state differential reactive cross sections and reaction probabilities, which are computed using the time-independent coupled-channel hyperspherical coordinate method. The calculations employ an earlier ab initio potential energy surface computed at the FCI/cc-pVQZ level of theory. The present simulations indicate that the planned experiments would be selective enough to differentiate between product distributions resulting from different initial internal states of the reactants.
Ellipticity dependence of multiple ionisation of methyl iodide clusters using a 532 nm nanosecond laser
NASA Astrophysics Data System (ADS)
Tang, Bin; Zhao, Wuduo; Wang, Weiguo; Hua, Lei; Chen, Ping; Hou, Keyong; Huang, Yunguang; Li, Haiyang
2016-03-01
The dependence of multiply charged ions on laser ellipticity in methyl iodide clusters irradiated with a 532 nm nanosecond laser was measured using a time-of-flight mass spectrometer. The intensities of multiply charged ions Iq+ (q = 2-4) with circularly polarised laser pulses were clearly higher than those with linearly polarised pulses, whereas the intensity of singly charged ions I+ showed the opposite trend. The dependence of the ion yields on the optical polarisation state was also investigated: flower-petal and square distributions were observed for singly charged ions (I+, C+) and multiply charged ions (I2+, I3+, I4+, C2+), respectively. A theoretical calculation was proposed to simulate the ion distributions, and the theoretical results agreed well with the experimental ones. This indicates that a high multiphoton ionisation probability in the initial stage disintegrates large clusters into small ones and thereby suppresses the production of multiply charged ions.
Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.
Gao, Xiang; Acar, Levent
2016-07-04
This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and fuses sensor data collected at different positions. First, a multi-sensor integration method, together with the path of the airflow, is used to map the pattern of odor particle movement. Then, additional sensors are introduced in specific regions to determine the probable location of the odor source. Finally, the results of an odor source localization simulation and a real experiment are presented.
Johnston, Stephen S; Juday, Timothy; Seekins, Daniel; Espindle, Derek; Chu, Bong-Chul
2012-03-01
In treatment of human immunodeficiency virus (HIV), high levels of adherence to combination antiretroviral therapy (cART) are required to prevent failure of virologic suppression, development of drug resistance, and permanent loss of therapeutic options. No published research has assessed the association between cART prescription cost sharing and adherence to cART. To analyze the association between cART prescription cost sharing and adherence to initial cART in commercially insured antiretroviral (ARV)-naïve patients with HIV. This retrospective observational cohort study used 2002-2008 data from a large U.S. claims database of more than 56 million commercially insured individuals. Study subjects were patients aged 18 years or older who initiated cART during the period January 1, 2003, to December 31, 2007, had no ARV claims during the 6-month period prior to the initiation date, and had at least 1 ICD-9-CM diagnosis code for HIV infection (042, 795.71, V08) from 12 months before to 12 months after cART initiation. A minimum 12-month period of continuous enrollment after cART initiation was used to construct a patient-quarter repeated measures panel dataset in which each quarter of data that a patient contributed represented an observation. The evaluation period extended from cART initiation until the occurrence of 1 of the following events: addition of an ARV that was not part of the initial cART regimen, 30-day gap in possession of an ARV within the initiated cART regimen, hospitalization of 30 or more days, loss to follow-up due to study end (December 31, 2008), or disenrollment. The study's outcome was quarterly adherence to cART, defined as the number of days within the quarter that a patient possessed all components of the initial cART regimen. Each patient's cART cost-sharing amount was calculated per 30-day supply of the entire cART regimen. Adherence was dichotomized for analysis at the clinically meaningful thresholds of 95% and 78%. The dichotomized adherence outcomes were separately modeled using population-averaged generalized estimating equations (GEEs) with time-varying and time-constant covariates and an exchangeable working correlation structure. Independent variables included cost-sharing amount; sequential quarter number after cART initiation; interaction between cost-sharing amount and sequential quarter number (to capture any changes in the association of cost sharing with adherence that may occur over time after initiation of cART); and patient demographic, clinical, and insurance characteristics. For each sequential quarter after cART initiation, the GEE models were used to generate average predicted probabilities of adherence reaching each threshold (95% and 78%) at cost-sharing levels of $25, $75, and $144, which represented the 25th, 75th, and 90th percentiles of the cost-sharing distribution, respectively. The study sample included 19,199 patient-quarters and 3,731 patients: mean age 41.1 years; 83.2% male; mean (SD) duration of post-index period 5.1 (4.2) quarters; mean (SD) daily cART pill count 3.2 (2.2); mean (median) cost sharing per 30-day supply of the entire cART regimen $67 ($40). In the unadjusted analyses of patient-quarters, mean adherence ranged from 97.2% for cost-sharing levels within the 0-20th percentiles (from $0 to $20 per 30-day cART supply) to 94.0% for cost-sharing levels exceeding the 80th percentile (from $84 to $3,832 per 30-day cART supply). 
In the adjusted analyses for the second quarter (25th percentile of follow-up duration, n = 3,117 cases still under observation) at the cost-sharing levels of $25, $75, and $144, the predicted probabilities of at least 95% adherence were 0.782, 0.770, and 0.752, respectively, and the predicted probabilities of at least 78% adherence were 0.936, 0.931, and 0.924, respectively. The differences in the predicted probabilities of adherence grew over time. By the seventh quarter (the 75th percentile of follow-up duration, n = 1,096 cases still under observation), the predicted probabilities were 0.773, 0.746, and 0.707 for 95% adherence and 0.933, 0.922, and 0.904 for 78% adherence at cost-sharing levels of $25, $75, and $144, respectively. Increasing cART prescription cost sharing was associated with modestly decreased probability of maintaining clinically meaningful levels of cART adherence.
Does the probability of developing ocular trauma-related visual deficiency differ between genders?
Blanco-Hernández, Dulce Milagros Razo; Valencia-Aguirre, Jessica Daniela; Lima-Gómez, Virgilio
2011-01-01
Ocular trauma affects males more often than females, but the impact of this condition regarding visual prognosis is unknown. We undertook this study to compare the probability of developing ocular trauma-related visual deficiency between genders, as estimated by the ocular trauma score (OTS). We designed an observational, retrospective, comparative, cross-sectional and open-label study. Female patients aged ≥6 years with ocular trauma were included and matched by age and ocular wall status with male patients at a 1:2 male/female ratio. Initial trauma features and the probability of developing visual deficiency (best corrected visual acuity <20/40) 6 months after the injury, as estimated by the OTS, were compared between genders. The proportion and 95% confidence intervals (95% CI) of visual deficiency 6 months after the injury were estimated. Ocular trauma features and the probability of developing visual deficiency were compared between genders (χ(2) and Fisher's exact test); p value <0.05 was considered significant. Included were 399 eyes (133 from females and 266 from males). Mean age of patients was 25.7 ± 14.6 years. Statistical differences existed in the proportion of zone III in closed globe trauma (p = 0.01) and types A (p = 0.04) and type B (p = 0.02) in open globe trauma. The distribution of the OTS categories was similar for both genders (category 5: p = 0.9); the probability of developing visual deficiency was 32.6% (95% CI = 24.6 to 40.5) in females and 33.2% (95% CI = 27.6 to 38.9) in males (p = 0.9). The probability of developing ocular trauma-related visual deficiency was similar for both genders. The same standard is required.
NASA Technical Reports Server (NTRS)
Solakiewiz, Richard; Koshak, William
2008-01-01
Continuous monitoring of the ratio of cloud flashes to ground flashes may provide a better understanding of thunderstorm dynamics, intensification, and evolution, and it may be useful in severe weather warning. The National Lightning Detection Network (NLDN) senses ground flashes with exceptional detection efficiency and accuracy over most of the continental United States. A proposed Geostationary Lightning Mapper (GLM) aboard the Geostationary Operational Environmental Satellite (GOES-R) will look at the western hemisphere, and among the lightning data products to be made available will be the fundamental optical flash parameters for both cloud and ground flashes: radiance, area, duration, number of optical groups, and number of optical events. Previous studies have demonstrated that the optical flash parameter statistics of ground and cloud lightning, which are observable from space, are significantly different. This study investigates a Bayesian network methodology for discriminating lightning flash type (ground or cloud) using the lightning optical data and ancillary GOES-R data. A Directed Acyclic Graph (DAG) is set up with lightning as a "root" and data observed by GLM as the "leaves." This allows for a direct calculation of the joint probability distribution function for the lightning type and radiance, area, etc. Initially, the conditional probabilities that will be required can be estimated from the Lightning Imaging Sensor (LIS) and the Optical Transient Detector (OTD) together with NLDN data. Directly manipulating the joint distribution will yield the conditional probability that a lightning flash is a ground flash given the evidence, which consists of the observed lightning optical data [and possibly cloud data retrieved from the GOES-R Advanced Baseline Imager (ABI) in a more mature Bayesian network configuration]. Later, actual GLM and NLDN data can be used to refine the estimates of the conditional probabilities used in the model; i.e., the Bayesian network is a learning network. Methods for efficient calculation of the conditional probabilities (e.g., an algorithm using junction trees), finding data conflicts, goodness of fit, and dealing with missing data will also be addressed.
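A toy illustration of the kind of inference such a Bayesian network performs, using made-up conditional probability tables and a naive factorization over two coarsely binned optical observables; the numbers, the binning, and the independence assumption are all hypothetical, not the GLM/NLDN-derived tables described in the abstract.

```python
import numpy as np

# Hypothetical prior and conditional tables P(observable bin | flash type),
# with each optical observable coarsely binned as low / medium / high.
p_type = {"ground": 0.25, "cloud": 0.75}
p_radiance = {"ground": [0.2, 0.3, 0.5], "cloud": [0.5, 0.3, 0.2]}
p_area     = {"ground": [0.1, 0.4, 0.5], "cloud": [0.4, 0.4, 0.2]}

def posterior_ground(radiance_bin, area_bin):
    # Joint probability factorized as prior times the conditionals
    # (a naive-Bayes simplification of the DAG sketched in the abstract).
    post = {t: p_type[t] * p_radiance[t][radiance_bin] * p_area[t][area_bin]
            for t in p_type}
    return post["ground"] / sum(post.values())

# Evidence: high radiance and high area for an observed flash.
print(posterior_ground(radiance_bin=2, area_bin=2))  # ~0.68 with these tables
```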
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
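In generic notation (not the paper's), the canonical form that results from maximizing an entropy functional subject to a fixed expectation value of an error function E(m; D) over model parameters m, together with the marginalization used for individual parameters, can be written as:

```latex
% Generic notation, assumed for illustration: m are model parameters,
% D the data, E(m; D) the error function, and beta the sensitivity factor
% fixed by the expectation-value constraint.
\begin{align}
  p(\mathbf{m}\mid D) &= \frac{\exp[-\beta\, E(\mathbf{m}; D)]}
                              {\int \exp[-\beta\, E(\mathbf{m}'; D)]\, d\mathbf{m}'} ,\\
  p(m_i \mid D)       &= \int p(\mathbf{m}\mid D) \prod_{j\neq i} dm_j .
\end{align}
% The second line is the marginal distribution for a single parameter,
% obtained by integrating over all the others.
```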
Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.
2016-01-01
The 2011 Tohoku earthquake produced an unexpected large amount of shallow slip greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies both with depth, earthquake size and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet-propagation calculations are reported for the isotopic reactions HD + CH3 and D2 + CH3, in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. The study shows that excitation of HD (D2) enhances the reactivity, whereas excitation of the CH3 umbrella mode has the opposite effect. This is consistent with the H2 + CH3 reaction. The comparison of these three isotopic reactions also shows the isotopic effects in the initial-state-selected reaction probabilities. The cumulative reaction probabilities (CRPs) are obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation, which accounts for the contribution of the degrees of freedom missing in the six-dimensional calculation, is employed to obtain approximate full-dimensional CRPs. The rate-constant comparison shows that the H2 + CH3 reaction has the highest reactivity, followed by HD + CH3, with D2 + CH3 the lowest.
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox's method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulized the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
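The weighting function quoted in the preceding abstract is straightforward to evaluate directly; a small sketch follows, in which the value of the discounting parameter k is arbitrary and chosen only for illustration.

```python
import numpy as np

def w_hyperbolic(p, k=1.0):
    """Probability weighting function of the expected-value model for
    hyperbolic time discounting, w(p) = (1 - k*log(p))**(-1), as given
    in the abstract; k > 0 is the discounting parameter."""
    p = np.asarray(p, dtype=float)
    return 1.0 / (1.0 - k * np.log(p))

ps = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(w_hyperbolic(ps, k=1.0))
# Small probabilities are overweighted (w(0.01) ~ 0.18), large ones
# underweighted, as expected for a probability weighting function.
```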
Hybrid Approaches and Industrial Applications of Pattern Recognition,
1980-10-01
emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will...sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, whose probability density function is given, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
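As an illustration of fitting a beta distribution to cloud-cover fractions, a short sketch with synthetic data standing in for the 160 empirical distributions used in the study; the shape parameters and the Kolmogorov-Smirnov check are illustrative assumptions, not the study's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical cloud-cover fractions for one site (0 = clear, 1 = overcast).
cover = rng.beta(0.6, 0.4, size=500)

# Fit a beta distribution with the support fixed to [0, 1].
a, b, loc, scale = stats.beta.fit(cover, floc=0, fscale=1)
print(f"alpha = {a:.2f}, beta = {b:.2f}")

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted model.
print(stats.kstest(cover, "beta", args=(a, b, loc, scale)))
```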
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log normal distribution appeared reasonable because nearly all visual psychological data is plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2015-12-01
The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
Spudich, P.; Guatteri, Mariagiovanna; Otsuki, K.; Minagawa, J.
1998-01-01
Dislocation models of the 1995 Hyogo-ken Nanbu (Kobe) earthquake derived by Yoshida et al. (1996) show substantial changes in direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear traction on the fault before the earthquake origin time, can be derived at points on the fault where the slip rake rotates with time if slip velocity and stress change are known at these points. From Yoshida's slip model, we calculated dynamic stress changes on the ruptured fault surfaces. To estimate errors, we compared the slip velocities and dynamic stress changes of several published models of the earthquake. The differences between these models had an exponential distribution, not gaussian. We developed a Bayesian method to estimate the probability density function (PDF) of initial stress from the striations and from Yoshida's slip model. Striations near Toshima and Hirabayashi give initial stresses of about 13 and 7 MPa, respectively. We obtained initial stresses of about 7 to 17 MPa at depths of 2 to 10 km on a subset of points on the Nojima and Rokko fault systems. Our initial stresses and coseismic stress changes agree well with postearthquake stresses measured by hydrofracturing in deep boreholes near Hirabayashi and Ogura on Awaji Island. Our results indicate that the Nojima fault slipped at very low shear stress, and fractional stress drop was complete near the surface and about 32% below depths of 2 km. Our results at depth depend on the accuracy of the rake rotations in Yoshida's model, which are probably correct on the Nojima fault but debatable on the Rokko fault. Our results imply that curved or cross-cutting fault striations can be formed in a single earthquake, contradicting a common assumption of structural geology.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer-implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Rupture preparation process controlled by surface roughness on meter-scale laboratory fault
NASA Astrophysics Data System (ADS)
Yamashita, Futoshi; Fukuyama, Eiichi; Xu, Shiqing; Mizoguchi, Kazuo; Kawakata, Hironori; Takizawa, Shigeru
2018-05-01
We investigate the effect of fault surface roughness on rupture preparation characteristics using meter-scale metagabbro specimens. We repeatedly conducted the experiments with the same pair of rock specimens to make the fault surface rough. We obtained three experimental results under the same experimental conditions (6.7 MPa of normal stress and 0.01 mm/s of loading rate) but at different roughness conditions (smooth, moderately roughened, and heavily roughened). During each experiment, we observed many stick-slip events preceded by precursory slow slip. We investigated when and where slow slip initiated by using the strain gauge data processed by the Kalman filter algorithm. The observed rupture preparation processes on the smooth fault (i.e. the first experiment among the three) showed high repeatability of the spatiotemporal distributions of slow slip initiation. Local stress measurements revealed that slow slip initiated around the region where the ratio of shear to normal stress (τ/σ) was the highest as expected from finite element method (FEM) modeling. However, the exact location of slow slip initiation was where τ/σ became locally minimum, probably due to the frictional heterogeneity. In the experiment on the moderately roughened fault, some irregular events were observed, though the basic characteristics of other regular events were similar to those on the smooth fault. Local stress data revealed that the spatiotemporal characteristics of slow slip initiation and the resulting τ/σ drop for irregular events were different from those for regular ones even under similar stress conditions. On the heavily roughened fault, the location of slow slip initiation was not consistent with τ/σ anymore because of the highly heterogeneous static friction on the fault, which also decreased the repeatability of spatiotemporal distributions of slow slip initiation. These results suggest that fault surface roughness strongly controls the rupture preparation process, and generally increases its complexity with the degree of roughness.
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10–3 in scientific notation or for brevity 10–3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10–3 to 10–6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
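A schematic example of the basic fit-and-extrapolate step (not the USGS-PeakFQ/EMA workflow): fit a Pearson type III distribution to the base-10 logarithms of annual peaks and read off quantiles at very low AEPs. The streamflow values below are synthetic, and SciPy's fit uses maximum likelihood rather than the product-moments procedure of Bulletin 17B.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical annual peak streamflows (cfs); a real analysis would use the
# systematic and historical record for a specific streamgage.
peaks = rng.lognormal(mean=9.0, sigma=0.6, size=80)

# Log-Pearson type III: fit Pearson III to the base-10 logarithms of the peaks.
logq = np.log10(peaks)
skew, loc, scale = stats.pearson3.fit(logq)

# Quantiles at selected annual exceedance probabilities, including very low AEPs.
for aep in (1e-2, 1e-3, 1e-4, 1e-6):
    q = 10 ** stats.pearson3.ppf(1.0 - aep, skew, loc=loc, scale=scale)
    print(f"AEP {aep:g}: {q:,.0f} cfs")
```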
Polynomial probability distribution estimation using the method of moments
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
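A minimal sketch of the moment-matching construction on a bounded interval: the polynomial coefficients follow from a linear system whose matrix contains the integrals of x^(m+n) over [a, b]. The choice of interval, the uniform-distribution example, and the absence of any non-negativity safeguard are simplifications relative to the paper.

```python
import numpy as np

def poly_pdf_from_moments(moments, a, b):
    """Coefficients c_0..c_N of a polynomial approximation
    f(x) ~ sum_n c_n * x**n on [a, b], chosen so that the first
    len(moments) raw moments (mu_0 = 1, mu_1, ...) are matched.
    A sketch of the method-of-moments construction; no attempt is
    made here to enforce non-negativity of the approximation."""
    mu = np.asarray(moments, dtype=float)
    N = len(mu)
    A = np.empty((N, N))
    for m in range(N):          # moment order
        for n in range(N):      # polynomial degree
            k = m + n + 1
            A[m, n] = (b**k - a**k) / k   # integral of x**m * x**n over [a, b]
    return np.linalg.solve(A, mu)

# Example: recover a standard uniform density on [0, 1] from its first
# three raw moments 1, 1/2, 1/3.
c = poly_pdf_from_moments([1.0, 0.5, 1.0 / 3.0], 0.0, 1.0)
print(c)   # ~[1, 0, 0], i.e. f(x) ~ 1 on [0, 1]
```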
NASA Astrophysics Data System (ADS)
Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.
2018-04-01
High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.
Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes
NASA Astrophysics Data System (ADS)
Zhai, Y.; Liu, J.; Liu, L.
2018-04-01
To address the lack of a scientific and reasonable basis for judging the vehicle delivery scale and the insufficient optimization of scheduling decisions, this paper analyses, based on features of bike-sharing usage, the applicability of a discrete-time, discrete-state Markov chain and proves that the chain is irreducible, aperiodic and positive recurrent. From this analysis the paper concludes that the limit-state (steady-state) probability of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyses the difficulties of estimating the transition probability matrix parameters and of solving the system of linear equations in the traditional solution algorithm for the bike-sharing Markov chain. To improve feasibility, the paper proposes a "virtual two-node vehicle scale solution" algorithm, which treats all nodes other than the node to be solved as a single virtual node, and provides the transition probability matrix, the steady-state linear equations and the computational methods for the steady-state scale, steady-state arrival time and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probability obtained by the proposed algorithm through comparison with the traditional algorithm. By solving for the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible, because it lowers the computational difficulty and reduces the number of statistics required, which will help bike-sharing companies optimize the scale and scheduling of nodes.
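A small sketch of the steady-state computation that underlies this approach: solve pi P = pi together with the normalization sum(pi) = 1. The two-state transition matrix stands in for the "node of interest versus virtual aggregate node" simplification described in the abstract; its entries and the fleet size are hypothetical.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of an irreducible, aperiodic,
    positive-recurrent Markov chain with transition matrix P,
    obtained from pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical two-node chain: the node of interest vs. a single virtual
# node aggregating all other rental nodes.
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])
pi = steady_state(P)
print(pi)               # ~[0.25, 0.75]
print(1000 * pi[0])     # steady-state share of a hypothetical 1000-bike fleet
```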
Evolution of Particle Size Distributions in Fragmentation Over Time
NASA Astrophysics Data System (ADS)
Charalambous, C. A.; Pike, W. T.
2013-12-01
We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs with discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as the equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model to the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows probabilities to be assigned to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine the spectrum model uniquely. For these two studies, we use the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed over recent years for the analysis of atomic spectra. As indicated by its name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of the targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not observed previously for the original sandpile. For this critical range of values of the probability, model statistics compare remarkably well with long-period empirical data from earthquakes from different seismogenic regions. The proposed model has key advantages, the foremost of which is the fact that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while adding minimal parameters to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen, in a 'T' shape configuration, with one road 1 x 4000 cells (5 m x 20 km) in a 'T' formation with another road 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m2, which closely matches the value of ĀL for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL) as a function of NBL, follows reasonably well a three-parameter inverse gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of the number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, which is similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of road network potential damage during different magnitude landslide triggering event scenarios.
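A simplified sketch of this Monte Carlo procedure, with a single straight road instead of the 'T'-shaped raster network and a generic heavy-tailed (inverse-gamma) area distribution in place of the calibrated three-parameter inverse gamma; all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

region = 20_000.0        # 20 km x 20 km study region (metres), road along y = 0
n_landslides = 400       # ~1 triggered landslide per km^2, as in the abstract
n_iter = 500             # Monte Carlo iterations

# Generic heavy-tailed area distribution; the study samples areas from a
# calibrated three-parameter inverse gamma with a ~ -2.4 power-law tail.
area_dist = stats.invgamma(a=1.4, scale=1000.0)

blocks_per_iter = np.empty(n_iter, dtype=int)
for i in range(n_iter):
    areas = area_dist.rvs(n_landslides, random_state=rng)   # m^2
    radii = np.sqrt(areas / np.pi)                          # treat landslides as circles
    y = rng.uniform(-region / 2.0, region / 2.0, n_landslides)
    blocks_per_iter[i] = int(np.sum(np.abs(y) <= radii))    # circle touches the road line

print("mean number of road blockages per event:", blocks_per_iter.mean())
print("P(at least one blockage):", (blocks_per_iter > 0).mean())
print("max blockages in any iteration:", blocks_per_iter.max())
```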
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinoza, I; Peschke, P; Karger, C
Purpose: In radiotherapy, it is important to predict the response of tumour to irradiation prior to the treatment. Mathematical modelling of tumour control probability (TCP) based on the dose distribution, medical imaging and other biological information may help to improve this prediction and to optimize the treatment plan. The aim of this work is to develop an image based 3D multiscale radiobiological model, which describes the growth and the response to radiotherapy of hypoxic tumors. Methods: The computer model is based on voxels, containing tumour, normal (including capillary) and dead cells. Killing of tumour cells due to irradiation is calculated by the Linear Quadratic Model (extended for hypoxia), and the proliferation and resorption of cells are modelled by exponential laws. The initial shape of the tumours is taken from CT images and the initial vascular and cell density information from PET and/or MR images. Including the fractionation regime and the physical dose distribution of the radiation treatment, the model simulates the spatial-temporal evolution of the tumor. Additionally, the dose distribution may be biologically optimized. Results: The model describes the appearance of hypoxia during tumour growth and the reoxygenation processes during radiotherapy. Among other parameters, the TCP is calculated for different dose distributions. The results are in accordance with published results. Conclusion: The simulation model may contribute to the understanding of the influence of biological parameters on tumor response during treatment, and specifically on TCP. It may be used to implement dose-painting approaches. Experimental and clinical validation is needed. This study is supported by a grant from the Ministry of Education of Chile, Programa Mece Educacion Superior (2)
NASA Astrophysics Data System (ADS)
Kalaitzis, P.; Danakas, S.; Lépine, F.; Bordas, C.; Cohen, S.
2018-05-01
Photoionization microscopy (PM) is an experimental method allowing for high-resolution measurements of the electron current probability density in the case of photoionization of an atom in an external uniform static electric field. PM is based on high-resolution velocity-map imaging and offers the unique opportunity to observe the quantum oscillatory spatial structure of the outgoing electron flux. We present the basic elements of the quantum-mechanical theoretical framework of PM for hydrogenic systems near threshold. Our development is based on the computationally more convenient semiparabolic coordinate system. Theoretical results are first subjected to a quantitative comparison with hydrogenic images corresponding to quasibound states and a qualitative comparison with nonresonant images of multielectron atoms. Subsequently, particular attention is paid on the structure of the electron's momentum distribution transversely to the static field (i.e., of the angularly integrated differential cross-section as a function of electron energy and radius of impact on the detector). Such 2D maps provide at a glance a complete picture of the peculiarities of the differential cross-section over the entire near-threshold energy range. Hydrogenic transverse momentum distributions are computed for the cases of the ground and excited initial states and single- and two-photon ionization schemes. Their characteristics of general nature are identified by comparing the hydrogenic distributions among themselves, as well as with a presently recorded experimental distribution concerning the magnesium atom. Finally, specificities attributed to different target atoms, initial states, and excitation scenarios are also discussed, along with directions of further work.
Hammond, Karl D.; Wirth, Brian D.
2014-10-09
Here, we present atomistic simulations that show the effect of surface orientation on helium depth distributions and surface feature formation as a result of low-energy helium plasma exposure. We find a pronounced effect of surface orientation on the initial depth of implanted helium ions, as well as a difference in reflection and helium retention across different surface orientations. Our results indicate that single helium interstitials are sufficient to induce the formation of adatom/substitutional helium pairs under certain highly corrugated tungsten surfaces, such as {1 1 1}-orientations, leading to the formation of a relatively concentrated layer of immobile helium immediately below the surface. The energies involved for helium-induced adatom formation on {1 1 1} and {2 1 1} surfaces are exoergic for even a single adatom very close to the surface, while {0 0 1} and {0 1 1} surfaces require two or even three helium atoms in a cluster before a substitutional helium cluster and adatom will form with reasonable probability. This phenomenon results in much higher initial helium retention during helium plasma exposure to {1 1 1} and {2 1 1} tungsten surfaces than is observed for {0 0 1} or {0 1 1} surfaces and is much higher than can be attributed to differences in the initial depth distributions alone. Lastly, the layer thus formed may serve as nucleation sites for further bubble formation and growth or as a source of material embrittlement or fatigue, which may have implications for the formation of tungsten "fuzz" in plasma-facing divertors for magnetic-confinement nuclear fusion reactors and/or the lifetime of such divertors.
Characterising RNA secondary structure space using information entropy
2013-01-01
Comparative methods for RNA secondary structure prediction use evolutionary information from RNA alignments to increase prediction accuracy. The model is often described in terms of stochastic context-free grammars (SCFGs), which generate a probability distribution over secondary structures. It is, however, unclear how this probability distribution changes as a function of the input alignment. As prediction programs typically only return a single secondary structure, better characterisation of the underlying probability space of RNA secondary structures is of great interest. In this work, we show how to efficiently compute the information entropy of the probability distribution over RNA secondary structures produced for RNA alignments by a phylo-SCFG, and implement it for the PPfold model. We also discuss interpretations and applications of this quantity, including how it can clarify reasons for low prediction reliability scores. PPfold and its source code are available from http://birc.au.dk/software/ppfold/. PMID:23368905
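The quantity being computed above is the Shannon entropy of the structure distribution. The sketch below applies the definition to a small enumerated set of candidate structures with hypothetical probabilities; the paper computes the same quantity efficiently over the full phylo-SCFG-induced distribution, which is not attempted here.

```python
import numpy as np

def information_entropy(probs, base=2.0):
    """Shannon entropy (in bits by default) of a discrete probability
    distribution, e.g. over candidate secondary structures."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

# Hypothetical posteriors over four candidate structures for an alignment:
# a peaked distribution (reliable prediction) vs. a flat one (unreliable).
print(information_entropy([0.85, 0.10, 0.04, 0.01]))   # ~0.78 bits
print(information_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits
```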
Nielsen, Bjørn G; Jensen, Morten Ø; Bohr, Henrik G
2003-01-01
The structure of enkephalin, a small neuropeptide with five amino acids, has been simulated on computers using molecular dynamics. Such simulations exhibit a few stable conformations, which also have been identified experimentally. The simulations provide the possibility to perform cluster analysis in the space defined by potentially pharmacophoric measures such as dihedral angles, side-chain orientation, etc. By analyzing the statistics of the resulting clusters, the probability distribution of the side-chain conformations may be determined. These probabilities allow us to predict the selectivity of [Leu]enkephalin and [Met]enkephalin to the known mu- and delta-type opiate receptors to which they bind as agonists. Other plausible consequences of these probability distributions are discussed in relation to the way in which they may influence the dynamics of the synapse. Copyright 2003 Wiley Periodicals, Inc. Biopolymers (Pept Sci) 71: 577-592, 2003
NASA Astrophysics Data System (ADS)
Vitanov, Nikolay V.
2018-05-01
In the experimental determination of the population transfer efficiency between discrete states of a coherently driven quantum system it is often inconvenient to measure the population of the target state. Instead, after the interaction that transfers the population from the initial state to the target state, a second interaction is applied which brings the system back to the initial state, the population of which is easy to measure and normalize. If the transition probability is p in the forward process, then classical intuition suggests that the probability to return to the initial state after the backward process should be p². However, this classical expectation is generally misleading because it neglects interference effects. This paper presents a rigorous theoretical analysis based on the SU(2) and SU(3) symmetries of the propagators describing the evolution of quantum systems with two and three states, resulting in explicit analytic formulas that link the two-step probabilities to the single-step ones. Explicit examples are given with the popular techniques of rapid adiabatic passage and stimulated Raman adiabatic passage. The present results suggest that quantum-mechanical probabilities degrade faster in repeated processes than classical probabilities. Therefore, the actual single-pass efficiencies in various experiments, calculated from double-pass probabilities, might have been greater than the reported values.
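A quick numerical check of the central point, with an arbitrary SU(2) propagator applied twice: the quantum return probability differs from the classical expectation p² because the "transfer-transfer" and "stay-stay" paths interfere. The parameterization below is an illustrative assumption, not the paper's derivation.

```python
import numpy as np

# Arbitrary two-state (SU(2)) propagator with mixing angle theta and phase phi.
theta, phi = 0.6, 0.9
U = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
              [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

p = abs(U[1, 0])**2                   # single-pass transition probability
psi = U @ U @ np.array([1.0, 0.0])    # apply the same pulse twice
p_return = abs(psi[0])**2             # population back in the initial state

print(f"p = {p:.3f}, p**2 = {p**2:.3f}, quantum return = {p_return:.3f}")
# The double-pass return probability is not p**2, illustrating the
# interference effect the paper quantifies exactly.
```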
Particle Size Reduction in Geophysical Granular Flows: The Role of Rock Fragmentation
NASA Astrophysics Data System (ADS)
Bianchi, G.; Sklar, L. S.
2016-12-01
Particle size reduction in geophysical granular flows is caused by abrasion and fragmentation, and can affect transport dynamics by altering the particle size distribution. While the Sternberg equation is commonly used to predict the mean abrasion rate in the fluvial environment, and can also be applied to geophysical granular flows, predicting the evolution of the particle size distribution requires a better understanding of the controls on the rate of fragmentation and the size distribution of resulting particle fragments. To address this knowledge gap we are using single-particle free-fall experiments to test for the influence of particle size, impact velocity, and rock properties on fragmentation and abrasion rates. Rock types tested include granodiorite, basalt, and serpentinite. Initial particle masses and drop heights range from 20 to 1000 grams and 0.1 to 3.0 meters, respectively. Preliminary results of free-fall experiments suggest that the probability of fragmentation varies as a power function of kinetic energy on impact. The resulting size distributions of rock fragments can be collapsed by normalizing by initial particle mass, and can be fit with a generalized Pareto distribution. We apply the free-fall results to understand the evolution of granodiorite particle-size distributions in granular flow experiments using rotating drums ranging in diameter from 0.2 to 4.0 meters. In the drums, we find that the rates of silt production by abrasion and gravel production by fragmentation scale with drum size. To compare these rates with free-fall results we estimate the particle impact frequency and velocity. We then use population balance equations to model the evolution of particle size distributions due to the combined effects of abrasion and fragmentation. Finally, we use the free-fall and drum experimental results to model particle size evolution in Inyo Creek, a steep, debris-flow dominated catchment, and compare model results to field measurements.
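The kind of fit mentioned above can be sketched as follows: fit a generalized Pareto distribution to fragment masses normalized by the initial particle mass. The fragment data here are synthetic stand-ins, not the experimental measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical fragment masses (g) from one drop and the initial particle mass.
initial_mass = 500.0
fragment_mass = rng.pareto(2.5, size=200) * 5.0   # stand-in data, not measurements
normalized = fragment_mass / initial_mass

# Fit a generalized Pareto distribution to the normalized fragment sizes.
shape, loc, scale = stats.genpareto.fit(normalized, floc=0.0)
print(f"GPD shape = {shape:.3f}, scale = {scale:.4f}")

# Probability that a fragment exceeds 10% of the initial mass under the fit.
print("P(fragment > 0.1 * initial mass) =",
      stats.genpareto.sf(0.1, shape, loc=0.0, scale=scale))
```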
Scale invariance and universality in economic phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Amaral, L. A. N.; Gopikrishnan, P.; Plerou, V.; Salinger, M. A.
2002-03-01
This paper discusses some of the similarities between work being done by economists and by computational physicists seeking to contribute to economics. We also mention some of the differences in the approaches taken and seek to justify these different approaches by developing the argument that by approaching the same problem from different points of view, new results might emerge. In particular, we review two such new results. Specifically, we discuss the two newly discovered scaling results that appear to be `universal', in the sense that they hold for widely different economies as well as for different time periods: (i) the fluctuation of price changes of any stock market is characterized by a probability density function, which is a simple power law with exponent -4 extending over 10^2 standard deviations (a factor of 10^8 on the y-axis); this result is analogous to the Gutenberg-Richter power law describing the histogram of earthquakes of a given strength; (ii) for a wide range of economic organizations, the histogram shows how the size of an organization is inversely correlated with fluctuations in its size, with an exponent ≈0.2. Neither of these two new empirical laws has a firm theoretical foundation. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behaviour of the response function at the critical point (zero magnetic field) leads to large fluctuations. We discuss a curious `symmetry breaking' for values of Σ above a certain threshold value Σc, where Σ is defined to be the local first moment of the probability distribution of demand Ω - the difference between the number of shares traded in buyer-initiated and seller-initiated trades. This feature is qualitatively identical to the behaviour of the probability density of the magnetization for fixed values of the inverse temperature.
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
The case of escape probability as linear in short time
NASA Astrophysics Data System (ADS)
Marchewka, A.; Schuss, Z.
2018-02-01
We derive rigorously the short-time escape probability of a quantum particle from its compactly supported initial state, which has a discontinuous derivative at the boundary of the support. We show that this probability is linear in time, which seems to be a new result. The novelty of our calculation is the inclusion of the boundary layer of the propagated wave function formed outside the initial support. This result has applications to the decay law of the particle, to the Zeno behaviour, quantum absorption, time of arrival, quantum measurements, and more.
Wang, S Q; Zhang, H Y; Li, Z L
2016-10-01
Understanding the spatio-temporal distribution of pests in orchards can provide important information that could be used to design monitoring schemes and establish better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated by using probability kriging. Adults of B. minax were captured over two successive occurrences in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of the adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that the adults were estimated to occur in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots and their distance to the forest edge should be considered to enhance control of B. minax in small-scale orchards.
The Detection of Signals in Impulsive Noise.
1983-06-01
... if the noise has a symmetric distribution, sgn(x_i) will be -1 with probability 1/2 and +1 with probability 1/2. Considering the sum of observations as a binomial ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha
The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowledge of the probability distribution P(0, s) at the origin allows the probability distribution P(x, s) at all positions to be derived. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.
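For orientation, a generic one-dimensional form of such an equation, with a delta-function sink of time-dependent strength k(t) placed at the origin (the sink placement and the drift term are assumptions for illustration, not necessarily the exact form treated in the paper), is

```latex
\frac{\partial P(x,t)}{\partial t}
  = D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}
  + \frac{\partial}{\partial x}\bigl[V'(x)\,P(x,t)\bigr]
  - k(t)\,\delta(x)\,P(x,t).
```

Because the sink is localized at x = 0, it couples to the solution only through its value at the origin, which is consistent with the statement that knowledge of P(0, s) determines P(x, s).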
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface, it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
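For reference, the two standard facts invoked above are the definition of the Gaussian curvature and the Gauss-Bonnet theorem for a closed surface S:

```latex
K = \kappa_{1}\kappa_{2}, \qquad \int_{S} K \, dA = 2\pi\,\chi(S),
```

where κ1 and κ2 are the principal curvatures and χ(S) is the Euler characteristic.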
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
The power-law distribution is playing an increasingly important role in the study of complex systems. Motivated by the fact that complex systems cannot be solved exactly, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced in the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can completely replace the equal-probability hypothesis. Because the power-law distribution and the distribution in the product form between a power function and an exponential function, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is a basic principle which embodies concepts more extensively and reveals the basic laws governing the motion of objects more fundamentally. At the same time, this principle also reveals the intrinsic link between Nature and different objects in human society and the principles they all obey.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Steady-state distributions of probability fluxes on complex networks
NASA Astrophysics Data System (ADS)
Chełminiak, Przemysław; Kurzyński, Michał
2017-02-01
We consider a simple model of the Markovian stochastic dynamics on complex networks to examine the statistical properties of the probability fluxes. The additional transition, called hereafter a gate, powered by the external constant force breaks a detailed balance in the network. We argue, using a theoretical approach and numerical simulations, that the stationary distributions of the probability fluxes emergent under such conditions converge to the Gaussian distribution. By virtue of the stationary fluctuation theorem, its standard deviation depends directly on the square root of the mean flux. In turn, the nonlinear relation between the mean flux and the external force, which provides the key result of the present study, allows us to calculate the two parameters that entirely characterize the Gaussian distribution of the probability fluxes both close to as well as far from the equilibrium state. Also, the other effects that modify these parameters, such as the addition of shortcuts to the tree-like network, the extension and configuration of the gate and a change in the network size studied by means of computer simulations are widely discussed in terms of the rigorous theoretical predictions.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Using the Lorenz Curve to Characterize Risk Predictiveness and Etiologic Heterogeneity
Mauguen, Audrey; Begg, Colin B.
2017-01-01
The Lorenz curve is a graphical tool that is used widely in econometrics. It represents the spread of a probability distribution, and its traditional use has been to characterize population distributions of wealth or income, or more specifically, inequalities in wealth or income. However, its utility in public health research has not been broadly established. The purpose of this article is to explain its special usefulness for characterizing the population distribution of disease risks, and in particular for identifying the precise disease burden that can be predicted to occur in segments of the population that are known to have especially high (or low) risks, a feature that is important for evaluating the yield of screening or other disease prevention initiatives. We demonstrate that, although the Lorenz curve represents the distribution of predicted risks in a population at risk for the disease, in fact it can be estimated from a case–control study conducted in the population without the need for information on absolute risks. We explore two different estimation strategies and compare their statistical properties using simulations. The Lorenz curve is a statistical tool that deserves wider use in public health research. PMID:27096256
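A minimal sketch of the construction described above, computing a Lorenz curve directly from a vector of predicted risks (the risk values below are simulated placeholders, not study data):

```python
import numpy as np

def lorenz_curve(risks):
    """Cumulative share of total predicted risk versus cumulative share of the
    population, with individuals ordered from lowest to highest risk."""
    r = np.sort(np.asarray(risks, dtype=float))
    cum_risk = np.cumsum(r) / r.sum()
    pop_share = np.arange(1, r.size + 1) / r.size
    return np.concatenate(([0.0], pop_share)), np.concatenate(([0.0], cum_risk))

# Hypothetical predicted disease risks for a screened population.
rng = np.random.default_rng(1)
risks = rng.beta(0.5, 20.0, size=10_000)

x, y = lorenz_curve(risks)
# Fraction of total predicted disease burden carried by the top 10% highest-risk people.
top10 = 1.0 - np.interp(0.9, x, y)
print(f"share of predicted burden in top 10% of risk: {top10:.2f}")
```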
Particle detection and non-detection in a quantum time of arrival measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sombillo, Denny Lane B., E-mail: dsombillo@nip.upd.edu.ph; Galapon, Eric A.
2016-01-15
The standard time-of-arrival distribution cannot reproduce both the temporal and the spatial profile of the modulus squared of the time-evolved wave function for an arbitrary initial state. In particular, the time-of-arrival distribution gives a non-vanishing probability even if the wave function is zero at a given point for all values of time. This poses a problem in the standard formulation of quantum mechanics where one quantizes a classical observable and uses its spectral resolution to calculate the corresponding distribution. In this work, we show that the modulus squared of the time-evolved wave function is in fact contained in one of the degenerate eigenfunctions of the quantized time-of-arrival operator. This generalizes our understanding of quantum arrival phenomenon where particle detection is not a necessary requirement, thereby providing a direct link between time-of-arrival quantization and the outcomes of the two-slit experiment. -- Highlights: •The time-evolved position density is contained in the standard TOA distribution. •Particle may quantum mechanically arrive at a given point without being detected. •The eigenstates of the standard TOA operator are linked to the two-slit experiment.
Recoil-ion momentum distributions for transfer ionization in fast proton-He collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, H.T.; Reinhed, P.; Schuch, R.
2005-07-15
We present high-luminosity experimental investigations of the transfer ionization (TI: p + He → H^0 + He^2+ + e^-) process in collisions between fast protons and neutral helium atoms in the previously inaccessible high-energy range 1.4-5.8 MeV. The protons were stored in the heavy-ion storage and cooler ring CRYRING, where they intersected a narrow supersonic helium gas jet. We discuss the longitudinal recoil-ion momentum distribution, as measured by means of cold-target recoil-ion momentum spectroscopy, and find that this distribution splits into two completely separated peaks at the high end of our energy range. These separate contributions are discussed in terms of the earlier proposed Thomas TI (TTI) and kinematic TI mechanisms. The cross section of the TTI process is found to follow a σ ∝ v^(-b) dependence with b = 10.78 ± 0.27, in accordance with the expected v^(-11) asymptotic behavior. Further, we discuss the probability for shake-off accompanying electron transfer and the relation of this TI mechanism to photodouble ionization. Finally, the influence of the initial-state electron velocity distribution on the TTI process is discussed.
The shock waves in decaying supersonic turbulence
NASA Astrophysics Data System (ADS)
Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.
2000-04-01
We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp (-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity Probability Distribution Function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modeling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.
Product Distribution Theory for Control of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
Statistical characterization of discrete conservative systems: The web map
NASA Astrophysics Data System (ADS)
Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino
2017-10-01
We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional S_BG[p(x)] = -k ∫ dx p(x) ln p(x). In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized q statistics, based on the nonadditive entropic functional S_q[p(x)] = k (1 - ∫ dx [p(x)]^q)/(q - 1) (q ∈ R; S_1 = S_BG). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter K, we observe q-Gaussian distributions with q = 1.935... (Gaussian distributions), like for the standard map. In contrast, for intermediate values of K, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.
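The diagnostic used above (the distribution of the sum of successive iterates and its kurtosis) can be sketched with the closely related Chirikov standard map, which the abstract also mentions; the web map itself is not reproduced here, and the parameter values, iteration counts, and ensemble sizes below are illustrative choices only.

```python
import numpy as np

def standard_map_sums(K, n_iter=2**13, n_init=2000, seed=0):
    """Centred sum of successive momentum iterates of the Chirikov standard map
    for an ensemble of random initial conditions."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_init)
    p = rng.uniform(0, 2 * np.pi, n_init)
    s = np.zeros(n_init)
    for _ in range(n_iter):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        s += p - np.pi          # centre each iterate before summing
    return s

for K in (0.5, 10.0):           # weakly vs strongly chaotic regimes (illustrative)
    s = standard_map_sums(K)
    z = (s - s.mean()) / s.std()
    kurtosis = np.mean(z**4)    # equals 3 for a Gaussian; heavier tails give larger values
    print(f"K = {K:5.1f}  kurtosis of summed iterates = {kurtosis:.2f}")
```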
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probability of exceeding threshold values are required. Our contribution to filling this knowledge gap is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and, for the first time, shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
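A short sketch of the key property exploited above: once the first two moments of the (normalized) local concentration are specified, a Beta model is fully determined by the method of moments, and exceedance probabilities follow directly. The moment values and threshold below are hypothetical, not field data.

```python
import numpy as np
from scipy import stats

def beta_from_moments(mean, var):
    """Method-of-moments Beta(a, b) for a concentration scaled to [0, 1]."""
    nu = mean * (1.0 - mean) / var - 1.0   # requires var < mean * (1 - mean)
    return mean * nu, (1.0 - mean) * nu

# Hypothetical first two moments of the normalized local concentration.
mean_c, var_c = 0.15, 0.015
a, b = beta_from_moments(mean_c, var_c)

threshold = 0.5   # hypothetical critical threshold (normalized concentration)
print(f"Beta(a = {a:.2f}, b = {b:.2f})")
print("P(C > threshold):", stats.beta.sf(threshold, a, b))
```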
Halász, Gábor J; Csehi, András; Vibók, Ágnes; Cederbaum, Lorenz S
2014-12-26
Previous works have shown that dressing of diatomic molecules by standing or by running laser waves gives rise to the appearance of so-called light-induced conical intersections (LICIs). Because of the strong nonadiabatic couplings, the existence of such LICIs may significantly change the dynamical properties of a molecular system. In our former paper (J. Phys. Chem. A 2013, 117, 8528), the photodissociation dynamics of the D2+ molecule were studied in the LICI framework starting the initial vibrational nuclear wave packet from the superposition of all the vibrational states initially produced by ionizing D2. The present work complements our previous investigation by letting the initial nuclear wave packets start from different individual vibrational levels of D2+, in particular, above the energy of the LICI. The kinetic energy release spectra, the total dissociation probabilities, and the angular distributions of the photofragments are calculated and discussed. An interesting phenomenon has been found in the spectra of the photofragments. Applying the light-induced adiabatic picture supported by LICI, explanations are given for the unexpected structure of the spectra.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than the alternatives. In this study, we focus on the PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. The regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo simulation. Acknowledgements: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
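A hedged sketch of the PPCC statistic for a GEV hypothesis is given below; it uses a generic Cunnane-type plotting position rather than the shape-dependent formulas (Goel and De, In-na and Nguyen, Kim et al.) examined in the study, and the sample is synthetic.

```python
import numpy as np
from scipy import stats

def ppcc_gev(sample, shape, alpha=0.4):
    """Probability plot correlation coefficient for a GEV hypothesis.

    Uses a generic (i - alpha) / (n + 1 - 2*alpha) plotting position; the study
    above uses shape-dependent plotting position formulas instead.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    pp = (np.arange(1, n + 1) - alpha) / (n + 1 - 2 * alpha)
    q = stats.genextreme.ppf(pp, shape)   # GEV quantiles (location 0, scale 1, scipy convention)
    return np.corrcoef(x, q)[0, 1]

rng = np.random.default_rng(2)
sample = stats.genextreme.rvs(-0.1, size=50, random_state=rng)  # synthetic annual maxima
print("PPCC:", ppcc_gev(sample, shape=-0.1))
```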
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
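For context, the conventional first-order method being improved upon can be sketched as linearization of the Streeter-Phelps deficit about the means of the uncertain inputs, checked against Monte Carlo simulation. The parameter values, travel time, and independence assumption below are illustrative only; the paper's advanced method instead moves the linearization point to match the output level of interest.

```python
import numpy as np

def sp_deficit(kd, ka, L0, t=2.0, D0=0.0):
    """Streeter-Phelps dissolved-oxygen deficit at travel time t (days)."""
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

# Hypothetical means and standard deviations of the uncertain inputs.
means = np.array([0.30, 0.90, 12.0])      # kd (1/d), ka (1/d), L0 (mg/L)
sds   = np.array([0.03, 0.06, 2.0])

# Conventional first-order (linearized) variance propagation about the means.
eps = 1e-6
grad = np.array([
    (sp_deficit(*(means + eps * np.eye(3)[i])) - sp_deficit(*means)) / eps
    for i in range(3)
])
var_fo = np.sum((grad * sds) ** 2)        # independent inputs assumed

# Monte Carlo check.
rng = np.random.default_rng(3)
samples = rng.normal(means, sds, size=(100_000, 3))
d = sp_deficit(samples[:, 0], samples[:, 1], samples[:, 2])
print(f"first-order sd = {np.sqrt(var_fo):.3f} mg/L, Monte Carlo sd = {d.std():.3f} mg/L")
```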
Measures for a multidimensional multiverse
NASA Astrophysics Data System (ADS)
Chung, Hyeyoun
2015-04-01
We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p+1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ~ t_Λ ~ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ~ t_Λ ~ t_c. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.
NASA Astrophysics Data System (ADS)
Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.
2014-04-01
Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discuss the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Estimation of distribution overlap of urn models.
Hampton, Jerrad; Lladser, Manuel E
2012-01-01
A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
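A small Monte Carlo sketch of the dissimilarity probability defined above, together with its exact value for fully known discrete distributions (the two "urns" below are hypothetical frequency vectors, e.g., taxon frequencies in two environments):

```python
import numpy as np

def dissimilarity_mc(p, q, n_draws, n_rep=100_000, seed=4):
    """Monte Carlo estimate of the probability that a single draw from
    distribution p is not observed among n_draws draws from distribution q."""
    rng = np.random.default_rng(seed)
    k = len(p)
    x = rng.choice(k, size=n_rep, p=p)                 # one draw from p per replicate
    y = rng.choice(k, size=(n_rep, n_draws), p=q)      # n_draws draws from q
    unseen = ~(y == x[:, None]).any(axis=1)
    return unseen.mean()

n_draws = 10
p = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
q = np.array([0.05, 0.05, 0.1, 0.3, 0.5])
print("Monte Carlo estimate:", dissimilarity_mc(p, q, n_draws))
print("exact value:         ", np.sum(p * (1.0 - q) ** n_draws))
```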
Diffusion of active chiral particles
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.
2016-12-01
The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v ̂ at time t , and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called effect "anomalous, yet Brownian, diffusion," for which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.
A Search Model for Imperfectly Detected Targets
NASA Technical Reports Server (NTRS)
Ahumada, Albert
2012-01-01
Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to data presented last year (Ahumada, Billington, & Kaiwi) on the number of searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
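A quick simulation of the model just described, representing the number of inspections as N times a geometric count of failed full sweeps plus a uniform position within the final sweep (N, P, and the replicate count are illustrative):

```python
import numpy as np

def search_counts(N, P, n_rep=1_000_000, seed=5):
    """Number of sub-region inspections until detection for a systematic search
    over N sub-regions with per-visit detection probability P."""
    rng = np.random.default_rng(seed)
    failed_sweeps = rng.geometric(P, size=n_rep) - 1     # full sweeps that miss the target
    position = rng.integers(1, N + 1, size=n_rep)        # target's position within a sweep
    return N * failed_sweeps + position

counts = search_counts(N=100, P=0.6)
z = (counts - counts.mean()) / counts.std()
print(f"mean = {counts.mean():.1f}  sd = {counts.std():.1f}  kurtosis = {np.mean(z**4):.2f}")
```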
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
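A small sketch of the probability of passing the demonstration (PPD) for a 29-flaw set, assuming a hypothetical POD-versus-size curve; the normal-CDF form and all parameter values below are placeholders, not NASA's procedure.

```python
import numpy as np
from scipy import stats

def pod_curve(a, mu=0.04, sigma=0.01):
    """Hypothetical POD-versus-flaw-size curve, modeled here as a normal CDF;
    mu and sigma are illustrative placeholders."""
    return stats.norm.cdf(a, loc=mu, scale=sigma)

# Flaw size with 90% POD under this hypothetical curve.
a90 = stats.norm.ppf(0.90, loc=0.04, scale=0.01)

# A 29-flaw demonstration set spread over an assumed tolerance band around a90.
flaws = a90 + np.linspace(-0.002, 0.002, 29)

# The demonstration passes only if every one of the 29 flaws is detected.
ppd = np.prod(pod_curve(flaws))
print(f"a90 = {a90:.4f}, probability of passing demonstration = {ppd:.3f}")
```

If all 29 flaws sat exactly at a90, the PPD would be 0.9^29 ≈ 0.047, which illustrates why the demonstration set generally needs flaw sizes somewhat above α90 to achieve an acceptable PPD.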
Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra
NASA Astrophysics Data System (ADS)
Jeffers, S. V.; Asher, D. J.; Bailey, M. E.
2002-11-01
We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. This, when combined with a crater scaling law and an impactor size distribution through a Monte Carlo method, results in a crater size distribution. The resulting crater probability distributions are in good agreement with observed crater distributions on these asteroids.
Velocity distributions among colliding asteroids
NASA Technical Reports Server (NTRS)
Bottke, William F., Jr.; Nolan, Michael C.; Greenberg, Richard; Kolvoord, Robert A.
1994-01-01
The probability distribution for impact velocities between two given asteroids is wide, non-Gaussian, and often contains spikes according to our new method of analysis in which each possible orbital geometry for collision is weighted according to its probability. An average value would give a good representation only if the distribution were smooth and narrow. Therefore, the complete velocity distribution we obtain for various asteroid populations differs significantly from published histograms of average velocities. For all pairs among the 682 asteroids in the main-belt with D greater than 50 km, we find that our computed velocity distribution is much wider than previously computed histograms of average velocities. In this case, the most probable impact velocity is approximately 4.4 km/sec, compared with the mean impact velocity of 5.3 km/sec. For cases of a single asteroid (e.g., Gaspra or Ida) relative to an impacting population, the distribution we find yields lower velocities than previously reported by others. The width of these velocity distributions implies that mean impact velocities must be used with caution when calculating asteroid collisional lifetimes or crater-size distributions. Since the most probable impact velocities are lower than the mean, disruption events may occur less frequently than previously estimated. However, this disruption rate may be balanced somewhat by an apparent increase in the frequency of high-velocity impacts between asteroids. These results have implications for issues such as asteroidal disruption rates, the amount/type of impact ejecta available for meteoritical delivery to the Earth, and the geology and evolution of specific asteroids like Gaspra.
The Probability Distribution for a Biased Spinner
ERIC Educational Resources Information Center
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
NASA Astrophysics Data System (ADS)
Khajehei, S.; Madadgar, S.; Moradkhani, H.
2014-12-01
The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using statistical methods based on conditional probability density functions (pdfs). However, one of the requirements of current methods is the assumption of a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of the precipitation forecast needed to drive a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions built from univariate marginal distributions, and they are capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast with 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers, in reproducing the observed climatology during a ten-year verification period (2000-2010).
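A compact sketch of the copula idea described above: fit a Gaussian copula to paired forecast-observation values with empirical margins, then read off conditional quantiles of the observation given a new forecast. The gamma-distributed data, dependence structure, and quantile levels are all synthetic assumptions, not the paper's model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical paired monthly values: forecast vs. observed precipitation (mm).
obs = rng.gamma(2.0, 40.0, size=360)
fcst = 0.7 * obs + rng.gamma(2.0, 15.0, size=360)

def normal_scores(x):
    """Empirical margins mapped to standard-normal scores (Weibull plotting positions)."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1.0))

rho = np.corrcoef(normal_scores(obs), normal_scores(fcst))[0, 1]  # Gaussian-copula parameter

def conditional_obs_quantiles(new_fcst, probs=(0.1, 0.5, 0.9)):
    """Quantiles of the observation conditioned on a forecast value under a
    Gaussian copula with empirical margins (a sketch, not the paper's exact model)."""
    n = len(fcst)
    u_f = np.clip((fcst <= new_fcst).mean(), 1.0 / (n + 1), n / (n + 1.0))
    z_f = stats.norm.ppf(u_f)
    z_q = rho * z_f + np.sqrt(1.0 - rho**2) * stats.norm.ppf(probs)
    return np.quantile(obs, stats.norm.cdf(z_q))

print("conditional 10/50/90% obs:", conditional_obs_quantiles(new_fcst=120.0))
```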
Northern peatland initiation lagged abrupt increases in deglacial atmospheric CH4
Reyes, Alberto V.; Cooke, Colin A.
2011-01-01
Peatlands are a key component of the global carbon cycle. Chronologies of peatland initiation are typically based on compiled basal peat radiocarbon (14C) dates and frequency histograms of binned calibrated age ranges. However, such compilations are problematic because poor quality 14C dates are commonly included and because frequency histograms of binned age ranges introduce chronological artefacts that bias the record of peatland initiation. Using a published compilation of 274 basal 14C dates from Alaska as a case study, we show that nearly half the 14C dates are inappropriate for reconstructing peatland initiation, and that the temporal structure of peatland initiation is sensitive to sampling biases and treatment of calibrated 14C dates. We present revised chronologies of peatland initiation for Alaska and the circumpolar Arctic based on summed probability distributions of calibrated 14C dates. These revised chronologies reveal that northern peatland initiation lagged abrupt increases in atmospheric CH4 concentration at the start of the Bølling–Allerød interstadial (Termination 1A) and the end of the Younger Dryas chronozone (Termination 1B), suggesting that northern peatlands were not the primary drivers of the rapid increases in atmospheric CH4. Our results demonstrate that subtle methodological changes in the synthesis of basal 14C ages lead to substantially different interpretations of temporal trends in peatland initiation, with direct implications for the role of peatlands in the global carbon cycle. PMID:21368146
NASA Astrophysics Data System (ADS)
Gao, Haixia; Li, Ting; Xiao, Changming
2016-05-01
When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, passing through a series of nonequilibrium states in the process. With the assistance of Bayesian statistics and the hyperensemble, a probability distribution over these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The equilibrium state has the largest probability, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion can also be obtained in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state can persist, then the velocity with which a nonequilibrium state returns to equilibrium can also be determined through the reciprocal of the derivative of this probability. This tells us that the farther the state is from equilibrium, the faster the return velocity will be; as the system nears its equilibrium state, the velocity tends to become smaller and smaller, finally tending to 0 when the system reaches the equilibrium state.
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium or unstable or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. Also, it can handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConchie, Seth M.; Crye, Jason Michael; Pena, Kirsten
2015-09-30
This document summarizes the effort to use active-induced time correlation techniques to measure the enrichment of bulk quantities of enriched uranium. In summary, these techniques use an external source to initiate fission chains, and the time distribution of the detected fission chain neutrons is sensitive to the fissile material enrichment. The number of neutrons emitted from a chain is driven by the multiplication of the item, and the enrichment is closely coupled to the multiplication of the item. As the enrichment increases (decreases), the multiplication increases (decreases) if the geometry is held constant. The time distribution of fission chain neutrons is a complex function of the enrichment and material configuration. The enrichment contributes to the probability of a subsequent fission in a chain via the likelihood of fissioning on an even-numbered isotope versus an odd-numbered isotope. The material configuration contributes to the same probability via solid angle effects for neutrons inducing subsequent fissions and the presence of any moderating material. To simplify the ability to accurately measure the enrichment, an associated particle imaging (API) D-T neutron generator and an array of plastic scintillators are used to simultaneously image the item and detect the fission chain neutrons. The image is used to significantly limit the space of enrichment and material configuration and enable the enrichment to be determined unambiguously.
49 CFR 173.50 - Class 1-Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... insensitive that there is very little probability of initiation or of transition from burning to detonation under normal conditions of transport. 1 The probability of transition from burning to detonation is... contain only extremely insensitive detonating substances and which demonstrate a negligible probability of...
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is the proper probability distribution for driving evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we discover that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science research may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, in experiments on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
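The selection step being varied in this study can be sketched as choosing the rank of the component to update (rank 1 = worst) from a power-law, exponential, or power-law-with-cutoff distribution; the τ and μ values below are arbitrary illustrative choices, not those of the paper.

```python
import numpy as np

def rank_selection_probs(n, kind="power", tau=1.4, mu=0.05):
    """Selection probability over ranks 1..n (1 = worst component) for
    extremal-optimization-style updates. 'power' is classic tau-EO; the
    exponential and cutoff forms are the alternatives discussed above."""
    k = np.arange(1, n + 1, dtype=float)
    if kind == "power":
        w = k ** -tau
    elif kind == "exponential":
        w = np.exp(-mu * k)
    else:                                   # power law with exponential cutoff
        w = k ** -tau * np.exp(-mu * k)
    return w / w.sum()

rng = np.random.default_rng(7)
n = 200
for kind in ("power", "exponential", "cutoff"):
    p = rank_selection_probs(n, kind)
    picks = rng.choice(n, size=10_000, p=p) + 1
    print(f"{kind:12s} mean selected rank = {picks.mean():6.1f}")
```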
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
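The probit-regression structure above lends itself to Albert–Chib data augmentation, which is the standard building block of such a Gibbs sampler. The sketch below (in Python rather than the R code provided by the authors, and omitting the latent occupancy-state update of the full site-occupancy sampler) is a minimal, hedged illustration of that step; the prior variance is an arbitrary choice.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, prior_var=100.0):
    """Albert-Chib Gibbs sampler for a probit regression y ~ Bernoulli(Phi(X beta)).
    Alternates latent-utility draws with conjugate Gaussian draws for beta."""
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(np.eye(p) / prior_var + X.T @ X)   # posterior covariance (fixed)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) latent utilities z_i ~ N(x_i' beta, 1), truncated by the observed y_i
        mu = X @ beta
        a = np.where(y == 1, -mu, -np.inf)               # standardized lower bounds
        b = np.where(y == 1, np.inf, -mu)                # standardized upper bounds
        z = mu + truncnorm.rvs(a, b, size=n)
        # 2) beta | z ~ N(m, V)
        m = V @ (X.T @ z)
        beta = np.random.multivariate_normal(m, V)
        draws[t] = beta
    return draws

# hypothetical covariate matrix and detection/occurrence indicator
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < 0.4).astype(int)
print(probit_gibbs(X, y, n_iter=500).mean(axis=0))
```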
NASA Astrophysics Data System (ADS)
Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.
2015-12-01
Measuring streamflow, developing, and maintaining rating curves at new streamgaging stations is both time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership to identify and evaluate new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally-efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady-flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
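In Chiu's probability-concept formulation, the ratio of mean to maximum velocity is φ(M) = e^M/(e^M − 1) − 1/M, where M is a site-specific entropy parameter. The sketch below shows how a radar-measured surface velocity might be converted to a discharge estimate under the simplifying assumptions (not from the abstract) that the surface velocity approximates u_max, that M is known from historical gaugings, and that the cross-sectional area comes from a survey; all numbers are hypothetical.

```python
import numpy as np

def chiu_mean_velocity(u_max, M):
    """Mean channel velocity from the maximum velocity using Chiu's
    probability-concept ratio phi(M) = e^M/(e^M - 1) - 1/M."""
    phi = np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M
    return phi * u_max

u_surface = 1.8   # m/s, radar surface velocity, assumed ~ u_max
M = 2.1           # dimensionless entropy parameter (site specific, assumed)
area = 42.0       # m^2, wetted cross-sectional area (assumed)
Q = chiu_mean_velocity(u_surface, M) * area   # discharge, m^3/s
print(f"estimated streamflow: {Q:.1f} m^3/s")
```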
Method for detecting and avoiding flight hazards
NASA Astrophysics Data System (ADS)
von Viebahn, Harro; Schiefele, Jens
1997-06-01
Today's aircraft equipment comprises several independent warning and hazard-avoidance systems such as GPWS, TCAS, or weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The developed method for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft consisting of discrete volume elements. For each volume element, the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted using a probability distribution. This approach ensures that all potential positions of the aircraft in the near future are considered while weighting the most likely flight path. A conflict-detection algorithm initiates an alarm in case the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one that caused the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain, and obstacle hazard-avoidance functions. Its general formulation allows easy integration of, e.g., weather information or airspace restrictions.
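The core of such a scheme is a weighted sum of per-voxel threat probabilities over the predicted position distribution, with an alarm when the aggregate exceeds a threshold. The Python sketch below is a minimal illustration of that idea only; the grid size, threat values, position distribution, and threshold are all hypothetical, not parameters from the paper.

```python
import numpy as np

def conflict_risk(threat_prob, position_prob, threshold=1e-3):
    """Combine per-voxel threat probabilities with the predicted probability
    distribution of the aircraft position; alarm if the aggregate exceeds
    the threshold."""
    risk = float(np.sum(threat_prob * position_prob))   # same discretized airspace grid
    return risk, risk > threshold

grid = (20, 20, 10)                                     # hypothetical 3-D airspace grid
threat = np.zeros(grid)
threat[12:15, 8:11, :3] = 0.4                           # e.g. terrain/obstacle cells
rng = np.random.default_rng(0)
position = rng.dirichlet(np.ones(np.prod(grid))).reshape(grid)  # predicted own-ship pdf
risk, alarm = conflict_risk(threat, position)
print(risk, alarm)
```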
Optimizing one-shot learning with binary synapses.
Romani, Sandro; Amit, Daniel J; Amit, Yali
2008-08-01
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing thousands of once-seen stimuli as familiar and distinguishing them from stimuli never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of the signal-to-noise ratio of the field difference between neurons selective and non-selective for learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
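A toy sketch of the stochastic binary-synapse update underlying such models is shown below; the network size, coding level, and potentiation/depression probabilities are assumptions chosen only to illustrate the fast-learning regime (potentiation probability near 1), not the authors' parameter values or their capacity analysis.

```python
import numpy as np

def present_pattern(J, xi, q_pot, q_dep, rng):
    """One-shot stochastic update of binary synapses J (0/1) for a binary
    activity pattern xi (0/1): potentiate pre/post-active pairs with
    probability q_pot, depress mismatched pairs with probability q_dep."""
    pre, post = np.meshgrid(xi, xi, indexing="ij")
    both_on = (pre == 1) & (post == 1)
    mismatch = (pre == 1) & (post == 0)
    J = J.copy()
    J[both_on & (rng.random(J.shape) < q_pot)] = 1
    J[mismatch & (rng.random(J.shape) < q_dep)] = 0
    return J

rng = np.random.default_rng(0)
N, f = 1000, 0.1                                   # neurons, coding level (assumed)
J = (rng.random((N, N)) < 0.5).astype(int)         # synapses near their steady state
pattern = (rng.random(N) < f).astype(int)
J = present_pattern(J, pattern, q_pot=1.0, q_dep=0.05, rng=rng)  # fast learning: q_pot ~ 1
```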
The Italian national trends in smoking initiation and cessation according to gender and education.
Sardu, C; Mereu, A; Minerba, L; Contu, P
2009-09-01
OBJECTIVES. This study aims to assess trends in smoking initiation and cessation across successive birth cohorts, according to gender and education, in order to provide useful suggestions for tobacco control policy. STUDY DESIGN. The study is based on data from the "Health conditions and resort to sanitary services" survey carried out in Italy from October 2004 to September 2005 by the National Institute of Statistics. Through a multisampling procedure, a sample representative of the entire national territory was selected. To calculate trends in smoking initiation and cessation, data were stratified by birth cohort, gender, and education level, and analyzed with the life table method. The cumulative probability of smoking initiation across subsequent generations shows a downward trend followed by a plateau; this result provides no support for the hypothesis that smoking initiation is occurring at earlier ages. The cumulative probability of quitting across subsequent generations follows an upward trend, highlighting the growing tendency of smokers to become "early quitters" who give up by 30 years of age. The results suggest that the Italian antismoking approach, for the most part targeted at preventing smoking initiation by emphasising its negative consequences, has an effect on early smoking cessation. Health policies should reinforce the existing trend of early quitting through specific actions. In addition, our results show that men with low education exhibit the highest probability of smoking initiation and the lowest probability of early quitting, and should therefore be targeted with special attention.
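For readers unfamiliar with the life table method used here, the following minimal Python sketch estimates the cumulative probability of initiation by age from individual records, treating never-smokers as censored at their age at interview. The cohort data are hypothetical and the single-year intervals are an assumption; the published analysis is stratified by cohort, gender, and education.

```python
import numpy as np

def cumulative_initiation(age_at_event, event_observed, ages):
    """Life-table style estimate of the cumulative probability of smoking
    initiation by each age; never-smokers are censored (event_observed = 0)
    at their age at interview."""
    surv, out = 1.0, []
    for a in ages:
        at_risk = np.sum(age_at_event >= a)                       # smoke-free and still observed at age a
        events = np.sum((age_at_event == a) & (event_observed == 1))
        if at_risk > 0:
            surv *= 1.0 - events / at_risk
        out.append(1.0 - surv)
    return np.array(out)

# hypothetical cohort: age at initiation, or age at interview if never started
age_at_event = np.array([15, 17, 20, 25, 40, 16, 33, 50])
event_observed = np.array([1, 1, 1, 1, 0, 1, 0, 0])
print(cumulative_initiation(age_at_event, event_observed, ages=range(10, 36)))
```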
Rapidly assessing the probability of exceptionally high natural hazard losses
NASA Astrophysics Data System (ADS)
Gollini, Isabella; Rougier, Jonathan
2014-05-01
One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration and discuss the conditions under which the bounds are tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
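To make the idea of a nearly instant bound concrete, the sketch below computes two generic upper bounds on P(annual loss > capital) from an event loss table with independent Bernoulli event occurrences: Markov's inequality and the one-sided Chebyshev (Cantelli) inequality. These are illustrative textbook bounds, not necessarily the specific bounds derived in the paper, and the event table is hypothetical.

```python
import numpy as np

def tail_bounds(p, loss, capital):
    """Quick upper bounds on P(annual loss > capital) from an event loss table
    with independent Bernoulli occurrences (probability p, loss given occurrence)."""
    mean = np.sum(p * loss)
    var = np.sum(p * (1 - p) * loss**2)
    markov = min(mean / capital, 1.0)                      # Markov's inequality
    cantelli = var / (var + (capital - mean) ** 2) if capital > mean else 1.0
    return markov, cantelli

p = np.array([0.02, 0.01, 0.005])      # annual event probabilities (hypothetical)
loss = np.array([5e6, 2e7, 8e7])       # loss if the event occurs
print(tail_bounds(p, loss, capital=3e7))
```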
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Generalized quantum theory of recollapsing homogeneous cosmologies
NASA Astrophysics Data System (ADS)
Craig, David; Hartle, James B.
2004-06-01
A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic “J·dΣ” rule of quantum cosmology, as well as a generalization of this rule to generic initial states.
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
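A minimal numerical sketch of a Szegedy-type walk built from a transition matrix is given below, using the single bipartite step U = S(2Π − I) (the usual two-reflection walk is W = U²); this is one common convention and is not claimed to reproduce the paper's analysis exactly. The example chain is a small symmetric matrix, for which the abstract's result suggests a uniform Cesàro limit.

```python
import numpy as np

def szegedy_step(P):
    """Single bipartite step U = S(2*Pi - I) of the Szegedy walk built from a
    row-stochastic matrix P."""
    n = P.shape[0]
    psi = np.zeros((n, n * n))
    for i in range(n):
        psi[i, i * n:(i + 1) * n] = np.sqrt(P[i])      # |i> tensor sum_j sqrt(P_ij)|j>
    proj = psi.T @ psi                                 # Pi = sum_i |psi_i><psi_i|
    refl = 2 * proj - np.eye(n * n)
    swap = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            swap[j * n + i, i * n + j] = 1.0           # S|i,j> = |j,i>
    return swap @ refl

def cesaro_distribution(P, T=200, start=0):
    """Cesaro-averaged distribution of the first register over T steps,
    starting from |psi_start>."""
    n = P.shape[0]
    U = szegedy_step(P)
    state = np.zeros(n * n)
    state[start * n:(start + 1) * n] = np.sqrt(P[start])
    avg = np.zeros(n)
    for _ in range(T):
        state = U @ state
        avg += np.abs(state.reshape(n, n)) ** 2 @ np.ones(n)   # marginal on register 1
    return avg / T

P = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])  # symmetric chain
print(cesaro_distribution(P))
```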
Estimating alarm thresholds and the number of components in mixture distributions
NASA Astrophysics Data System (ADS)
Burr, Tom; Hamada, Michael S.
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and solution monitoring (SM) for nuclear safeguards. SM data are increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities that hold plutonium in solution form in many tanks. This paper provides background on mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals, defined as tank-to-tank transfer differences, and wait-mode residuals, defined as changes during non-transfer modes. A previous paper investigated the impacts on transfer-mode and wait-mode residuals of event marking errors, which arise when the estimated start and/or stop times of tank events such as transfers differ somewhat from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
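As a generic illustration of setting an alarm threshold under a mixture model (not the specific procedure of this paper), the sketch below fits a two-component Gaussian mixture to residuals and then finds the threshold whose exceedance probability under the fitted mixture equals a target false-alarm rate. The residual data and the false-alarm rate are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def alarm_threshold(residuals, n_components=2, false_alarm_rate=1e-3):
    """Fit a Gaussian mixture to residuals and return the threshold whose
    upper-tail probability under the fitted mixture equals false_alarm_rate."""
    gm = GaussianMixture(n_components=n_components).fit(residuals.reshape(-1, 1))
    w, mu = gm.weights_, gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())

    def tail(x):                                   # P(residual > x) under the mixture
        return np.sum(w * norm.sf(x, loc=mu, scale=sd))

    lo, hi = residuals.min(), residuals.max() + 10 * sd.max()
    for _ in range(100):                           # bisection on the monotone tail
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tail(mid) > false_alarm_rate else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical residuals: Gaussian noise plus a broader event-marking-error component
rng = np.random.default_rng(1)
res = np.concatenate([rng.normal(0, 1, 5000), rng.normal(3, 2, 250)])
print(alarm_threshold(res))
```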
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-vine model for the construction of the modal parameters' probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of modes considered in our context. To this end, a mode-selection preprocessing step is proposed; it selects the random modes relevant to a given transfer function. The second point addressed in this study concerns the choice of the D-vine model. Indeed, the D-vine model is not uniquely defined. Two strategies are proposed and compared: the first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in identifying the probability distribution of the random modal parameters and second in estimating the 99% quantiles of some transfer functions.
Probability distribution functions in turbulent convection
NASA Technical Reports Server (NTRS)
Balachandar, S.; Sirovich, L.
1991-01-01
Results of an extensive investigation of probability distribution functions (pdfs) for Rayleigh-Bénard convection, in the hard-turbulence regime, are presented. It is shown that the pdfs exhibit a high degree of internal universality. In certain cases this universality is established within two Kolmogorov scales of a boundary. A discussion of the factors leading to the universality is presented.
Power-law tail probabilities of drainage areas in river basins
Veitzer, S.A.; Troutman, B.M.; Gupta, V.K.
2003-01-01
The significance of power-law tail probabilities of drainage areas in river basins was discussed. The convergence to a power law was not observed for all underlying distributions, but for a large class of statistical distributions with specific limiting properties. The article also discussed the scaling properties of topologic and geometric network properties in river basins.
KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.
1962-06-01
A computational technique is described for obtaining the probability distribution of power level during a low-source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, with a finite neutron lifetime, one group of delayed-neutron precursors, and no spatial dependence. Results obtained by the technique are given. (auth)
Generating an Empirical Probability Distribution for the Andrews-Pregibon Statistic.
ERIC Educational Resources Information Center
Jarrell, Michele G.
A probability distribution was developed for the Andrews-Pregibon (AP) statistic. The statistic, developed by D. F. Andrews and D. Pregibon (1978), identifies multivariate outliers. It is a ratio of the determinant of the data matrix with an observation deleted to the determinant of the entire data matrix. Although the AP statistic has been used…
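One common form of the Andrews–Pregibon statistic uses the cross-product matrix of the augmented data matrix Z = [X, y]: AP_i = det(Z'(i)Z(i)) / det(Z'Z), where the subscript (i) denotes deletion of observation i, and small values flag potential outliers. The Python sketch below computes this form on hypothetical data; generating the empirical null distribution described in the report would amount to repeating this computation over many simulated data sets.

```python
import numpy as np

def andrews_pregibon(X, y):
    """Andrews-Pregibon statistic for each observation: ratio of det(Z'Z) with
    that observation deleted to det(Z'Z) for the full data, Z = [X, y]."""
    Z = np.column_stack([X, y])
    full = np.linalg.det(Z.T @ Z)
    ap = np.empty(Z.shape[0])
    for i in range(Z.shape[0]):
        Zi = np.delete(Z, i, axis=0)
        ap[i] = np.linalg.det(Zi.T @ Zi) / full
    return ap

# hypothetical regression data with one gross outlier in the last row
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 2 + 3 * X[:, 1] + rng.normal(scale=0.5, size=30)
y[-1] += 15
print(andrews_pregibon(X, y).round(3))   # the outlier yields a notably small value
```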
Animating Statistics: A New Kind of Applet for Exploring Probability Distributions
ERIC Educational Resources Information Center
Kahle, David
2014-01-01
In this article, I introduce a novel applet ("module") for exploring probability distributions, their samples, and various related statistical concepts. The module is primarily designed to be used by the instructor in the introductory course, but it can be used far beyond it as well. It is a free, cross-platform, stand-alone interactive…
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies, all the data from a population are often not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
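As a simple illustration of the kind of calculation the series builds toward, the sketch below computes a large-sample confidence interval for a population mean using the standard normal distribution; the sample of patient wait times is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def normal_ci(sample, confidence=0.95):
    """Confidence interval for a population mean using the standard normal
    distribution (appropriate for reasonably large samples)."""
    x = np.asarray(sample, dtype=float)
    mean = x.mean()
    se = x.std(ddof=1) / np.sqrt(x.size)          # standard error of the mean
    z = norm.ppf(0.5 + confidence / 2.0)          # e.g. 1.96 for 95%
    return mean - z * se, mean + z * se

# hypothetical sample of patient wait times (minutes)
waits = np.random.default_rng(2).gamma(shape=2.0, scale=15.0, size=120)
print(normal_ci(waits))
```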