Positive phase space distributions and uncertainty relations
NASA Technical Reports Server (NTRS)
Kruger, Jan
1993-01-01
In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
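The necessary-and-sufficient condition stated here is easy to check numerically. Below is a minimal sketch (not from the paper; ħ = 1 is assumed) that tests the Schroedinger-form relation det Σ ≥ ħ²/4 for a 2×2 phase-space covariance matrix, correlations included.

```python
# Minimal sketch (not from the paper): check Heisenberg's uncertainty
# relation in Schroedinger form, det(Sigma) >= hbar^2/4, for a candidate
# Gaussian (multivariate normal) phase-space distribution, with hbar = 1.
hbar = 1.0

def is_valid_wigner_gaussian(sigma_qq, sigma_pp, sigma_qp):
    """True iff the 2x2 covariance matrix satisfies the
    Schroedinger-form uncertainty relation (including correlations)."""
    det = sigma_qq * sigma_pp - sigma_qp ** 2
    return det >= (hbar / 2.0) ** 2

print(is_valid_wigner_gaussian(0.5, 0.5, 0.0))   # coherent state: True
print(is_valid_wigner_gaussian(0.3, 0.3, 0.25))  # too correlated: False
```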
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
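As an illustration of the probability-of-success (assurance) computation described above, here is a hedged Monte Carlo sketch; the effect sizes, standard errors, and the one-sided z-test are illustrative assumptions, not the authors' simulation design.

```python
# Hedged sketch of "probability of success": weight the phase III power by a
# normal distribution for the treatment effect taken from phase II.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def probability_of_success(delta_hat, se_phase2, se_phase3, alpha=0.025,
                           n_draws=100_000):
    # Draw plausible true effects from the phase II estimate's distribution,
    # then average the phase III power over those draws.
    delta = rng.normal(delta_hat, se_phase2, n_draws)
    z_alpha = norm.ppf(1 - alpha)
    power = norm.cdf(delta / se_phase3 - z_alpha)  # one-sided z-test power
    return power.mean()

print(probability_of_success(delta_hat=0.3, se_phase2=0.15, se_phase3=0.08))
```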
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation metrics included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can approximate four-dimensional dose computed using the patient-specific respiratory trace.
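The final summation step lends itself to a compact sketch. The following is illustrative only: `phase_doses` stands for doses already deformed onto the breath-hold CT grid, and the sinusoidal weighting form is an assumption, not the paper's exact model.

```python
# Illustrative sketch of the final step described above: the 4D dose is the
# temporal-probability-weighted sum of the deformed phase doses.
import numpy as np

rng = np.random.default_rng(0)
n_phases = 10
phase_doses = rng.random((n_phases, 64, 64, 32))   # Gy; deformed to breath-hold CT

p_uniform = np.full(n_phases, 1.0 / n_phases)      # uniform temporal probability
t = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
p_sin = np.abs(np.sin(t / 2.0))                    # assumed sinusoidal weighting form
p_sin /= p_sin.sum()

dose_4d_uniform = np.tensordot(p_uniform, phase_doses, axes=1)
dose_4d_sin = np.tensordot(p_sin, phase_doses, axes=1)
print(dose_4d_uniform.shape,
      float(np.abs(dose_4d_uniform - dose_4d_sin).mean()))
```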
Quantum work in the Bohmian framework
NASA Astrophysics Data System (ADS)
Sampaio, R.; Suomela, S.; Ala-Nissila, T.; Anders, J.; Philbin, T. G.
2018-01-01
At nonzero temperature classical systems exhibit statistical fluctuations of thermodynamic quantities arising from the variation of the system's initial conditions and its interaction with the environment. The fluctuating work, for example, is characterized by the ensemble of system trajectories in phase space and, by including the probabilities for various trajectories to occur, a work distribution can be constructed. However, without phase-space trajectories, the task of constructing a work probability distribution in the quantum regime has proven elusive. Here we use quantum trajectories in phase space and define fluctuating work as power integrated along the trajectories, in complete analogy to classical statistical physics. The resulting work probability distribution is valid for any quantum evolution, including cases with coherences in the energy basis. We demonstrate the quantum work probability distribution and its properties with an exactly solvable example of a driven quantum harmonic oscillator. An important feature of the work distribution is its dependence on the initial statistical mixture of pure states, which is reflected in higher moments of the work. The proposed approach introduces a fundamentally different perspective on quantum thermodynamics, allowing full thermodynamic characterization of the dynamics of quantum systems, including the measurement process.
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The analysis described herein starts from the input of source term probability distributions supplied by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
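A hedged sketch of how a CCDF of this kind is assembled from Monte Carlo consequence samples; the scenario probability and lognormal consequence model are synthetic stand-ins, not values from the safety analysis.

```python
# Sketch: given Monte Carlo samples of radiological consequence for a
# scenario, each carrying its share of the scenario probability, the CCDF
# gives the total probability that the consequence >= a given value.
import numpy as np

rng = np.random.default_rng(1)
consequences = rng.lognormal(mean=0.0, sigma=1.5, size=5000)   # health effects
scenario_prob = 1e-4                                           # P(accident scenario)
weights = np.full(consequences.size, scenario_prob / consequences.size)

order = np.argsort(consequences)
c_sorted = consequences[order]
ccdf = weights[order][::-1].cumsum()[::-1]   # ccdf[i] = P(C >= c_sorted[i])

for c in (1.0, 10.0):
    i = np.searchsorted(c_sorted, c)
    print(c, ccdf[i] if i < ccdf.size else 0.0)
```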
A Gaussian measure of quantum phase noise
NASA Technical Reports Server (NTRS)
Schleich, Wolfgang P.; Dowling, Jonathan P.
1992-01-01
We study the width of the semiclassical phase distribution of a quantum state as a function of the average number of photons (m) in this state. As a measure of phase noise, we choose the width, delta phi, of the best Gaussian approximation to the dominant peak of this probability curve. For a coherent state, this width decreases with the square root of (m), whereas for a truncated phase state it decreases linearly with increasing (m). For an optimal phase state, delta phi decreases exponentially, but so does the area caught underneath the peak: all the probability is stored in the broad wings of the distribution.
Multipartite entanglement characterization of a quantum phase transition
NASA Astrophysics Data System (ADS)
Costantini, G.; Facchi, P.; Florio, G.; Pascazio, S.
2007-07-01
A probability density characterization of multipartite entanglement is tested on the one-dimensional quantum Ising model in a transverse field. The average and second moment of the probability distribution are numerically shown to be good indicators of the quantum phase transition. We comment on multipartite entanglement generation at a quantum phase transition.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C. [Santa Fe, NM]
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
Probing the statistics of transport in the Hénon Map
NASA Astrophysics Data System (ADS)
Alus, O.; Fishman, S.; Meiss, J. D.
2016-09-01
The phase space of an area-preserving map typically contains infinitely many elliptic islands embedded in a chaotic sea. Orbits near the boundary of a chaotic region have been observed to stick for long times, strongly influencing their transport properties. The boundary is composed of invariant "boundary circles." We briefly report recent results on the distribution of rotation numbers of boundary circles for the Hénon quadratic map and show that the probability of occurrence of small integer entries of their continued fraction expansions is larger than would be expected for a number chosen at random. However, large integer entries occur with probabilities distributed proportionally to the random case. The probability distributions of ratios of fluxes through island chains are reported as well. These island chains are neighbours in the sense of the Meiss-Ott Markov-tree model. Two distinct universality families are found. The distributions of the ratio between the flux and orbital period are also presented. All of these results have implications for models of transport in mixed phase space.
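The continued-fraction statistics referred to here can be illustrated with a short script: tally the entries for randomly chosen numbers and compare with the Gauss-Kuzmin law, which governs the entries of a number chosen at random. This is illustrative only and does not reproduce the Hénon-map boundary-circle data.

```python
# Tally continued-fraction entries of random numbers and compare with the
# Gauss-Kuzmin law P(k) = -log2(1 - 1/(k+1)^2).
import math
import random

def cf_entries(x, n_entries=30):
    entries = []
    for _ in range(n_entries):
        a = int(x)
        entries.append(a)
        frac = x - a
        if frac < 1e-12:          # stop when the expansion terminates
            break
        x = 1.0 / frac
    return entries[1:]            # drop the integer part

random.seed(0)
counts = {}
for _ in range(2000):
    for a in cf_entries(random.random() + 1.0):
        counts[a] = counts.get(a, 0) + 1

total = sum(counts.values())
for k in range(1, 6):
    gk = -math.log2(1 - 1 / (k + 1) ** 2)
    print(k, round(counts.get(k, 0) / total, 4), round(gk, 4))
```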
On the issues of probability distribution of GPS carrier phase observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In common practice the observables related to the Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm, which is based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slip detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
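The skewness target function is simple to state in code. A minimal sketch on a synthetic grid (not crystallographic data) shows why skewness separates a peaky, protein-like map from a flat, noise-like one; the genetic algorithm itself is omitted.

```python
# Sketch of the target function: the skewness of an electron-density map,
# which density modification tends to increase (protein maps have a few
# high-density peaks on a flat solvent background). Synthetic stand-in map.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
flat_map = rng.normal(0.0, 1.0, size=(32, 32, 32))              # noise-like
peaky_map = flat_map + 5.0 * (rng.random((32, 32, 32)) > 0.99)  # add peaks

def map_skewness(rho):
    """Target function: third standardized moment of the density grid."""
    return skew(rho.ravel())

print(map_skewness(flat_map), map_skewness(peaky_map))
```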
On the use of the energy probability distribution zeros in the study of phase transitions
NASA Astrophysics Data System (ADS)
Mól, L. A. S.; Rodrigues, R. G. M.; Stancioli, R. A.; Rocha, J. C. S.; Costa, B. V.
2018-04-01
This contribution is devoted to covering some technical aspects related to the use of the recently proposed energy probability distribution zeros in the study of phase transitions. This method is based on partial knowledge of the partition function zeros and has been shown to be extremely efficient at precisely locating phase transition temperatures. It relies on an iterative procedure by which the transition temperature can be approached at will. The iterative method is detailed, and some convergence issues that have been observed in its application to the 2D Ising model and to an artificial spin ice model are shown, together with ways to circumvent them.
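A rough sketch of one zero-finding iteration, under simplifying assumptions (synthetic histogram, uniform energy bins; the update rule below is one plausible reading of the iterative method, not the authors' exact prescription):

```python
# Rough sketch: treat the energy probability distribution measured at beta0
# as polynomial coefficients in x = exp(-dbeta * eps), find the zero closest
# to x = 1, and map it back to a temperature shift. Synthetic histogram.
import numpy as np

beta0 = 0.4
eps = 1.0                                       # energy bin width
energies = np.arange(0, 40) * eps
h = np.exp(-((energies - 20.0) ** 2) / 30.0)    # synthetic EPD at beta0

# Z(beta0 + dbeta) is proportional to sum_n h_n x^n with x = exp(-dbeta*eps)
coeffs = h[::-1]                                # numpy.roots wants highest order first
zeros = np.roots(coeffs)
dominant = zeros[np.argmin(np.abs(zeros - 1.0))]  # zero closest to x = 1

dbeta = -np.log(dominant) / eps                 # complex; Im -> 0 at a true transition
print("estimated beta_c ~", beta0 + dbeta.real, "imag part", dbeta.imag)
```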
Misra, Anil; Spencer, Paulette; Marangos, Orestes; Wang, Yong; Katz, J. Lawrence
2005-01-01
A finite element (FE) model has been developed based upon the recently measured micro-scale morphological, chemical and mechanical properties of dentin–adhesive (d–a) interfaces using confocal Raman microspectroscopy and scanning acoustic microscopy (SAM). The results computed from this FE model indicated that the stress distributions and concentrations are affected by the micro-scale elastic properties of the various phases composing the d–a interface. However, these computations were performed assuming isotropic material properties for the d–a interface. The d–a interface components, such as the peritubular and intertubular dentin, the partially demineralized dentin and the so-called ‘hybrid layer’ adhesive-collagen composite, are probably anisotropic. In this paper, the FE model is extended to account for the probable anisotropic properties of these d–a interface phases. A parametric study is performed to study the effect of anisotropy on the micromechanical stress distributions in the hybrid layer and the peritubular dentin phases of the d–a interface. It is found that the anisotropy of the phases affects the region and extent of stress concentration as well as the location of the maximum stress concentrations. Thus, the anisotropy of the phases could affect the probable location of failure initiation, whether in the peritubular region or in the hybrid layer. PMID:16849175
Augmenting Phase Space Quantization to Introduce Additional Physical Effects
NASA Astrophysics Data System (ADS)
Robbins, Matthew P. G.
Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki
2018-01-01
DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
NASA Astrophysics Data System (ADS)
Dávila, Alán; Escudero, Christian; López, Jorge
2004-10-01
Several methods have been developed to study phase transitions in nuclear fragmentation. The one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. In systems such as atomic nuclei or molecules, energy is put into the system, and the particles move away from each other until their links are broken, though some particles remain linked. The fragment distribution is found to be a power law; we are then witnessing a critical phenomenon. In our model the particles are represented as occupied sites in a cubical array. Each particle has a bond to each of its 6 neighbors, and each bond is either active, if the two particles are linked, or inactive if they are not. When two or more particles are linked, a fragment is formed. The probability for a specific link to be broken cannot be calculated, so the probability for a bond to be active is used as the parameter when fitting the data. For a given probability p, several arrays are generated, the fragments are counted, and the fragment distribution is fitted to a power law. The probability that generates the best fit is taken as the critical probability indicating a phase transition. The best fit is found by seeking the fragment distribution that gives the minimal chi-squared when compared to a power law. As additional evidence of criticality, the entropy and normalized variance of the mass are also calculated for each probability.
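A self-contained bond-percolation sketch in the spirit of the model described: bonds on a cubic array are activated with probability p, fragments are counted with union-find, and a power-law slope is fitted. The lattice size, threshold value, and fitting recipe are illustrative choices.

```python
# Bond percolation on an L^3 cubic array: each of the nearest-neighbor bonds
# is active with probability p; fragments are counted with union-find.
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def fragment_sizes(L, p, rng):
    n = L ** 3
    parent = np.arange(n)
    idx = lambda x, y, z: (x * L + y) * L + z   # flatten 3D index
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    if x + dx < L and y + dy < L and z + dz < L:
                        if rng.random() < p:    # bond is active
                            a = find(parent, idx(x, y, z))
                            b = find(parent, idx(x + dx, y + dy, z + dz))
                            parent[a] = b
    roots = np.array([find(parent, i) for i in range(n)])
    _, sizes = np.unique(roots, return_counts=True)
    return sizes

rng = np.random.default_rng(3)
sizes = fragment_sizes(L=16, p=0.2488, rng=rng)   # near the 3D bond threshold
s, counts = np.unique(sizes, return_counts=True)
slope = np.polyfit(np.log(s[counts > 1]), np.log(counts[counts > 1]), 1)[0]
print("fragments:", len(sizes), "power-law slope ~", round(slope, 2))
```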
NASA Astrophysics Data System (ADS)
Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi
2008-01-01
A thorough simulation and evaluation of phase noise for optical amplification using a semiconductor optical amplifier (SOA) is very important for predicting its performance in differential phase-shift keyed (DPSK) applications. In this paper, the standard deviation and probability distribution of the differential phase noise at the SOA output are obtained from the statistics of simulated differential phase noise. By using a full-wave model of the SOA, the noise performance in the entire operation range can be investigated. It is shown that nonlinear phase noise substantially contributes to the total phase noise in the case of a noisy signal amplified by a saturated SOA, and that the nonlinear contribution is larger with shorter SOA carrier lifetime. It is also shown that a Gaussian distribution can serve as a good approximation of the total differential phase noise statistics in the whole operation range. The power penalty due to differential phase noise is evaluated using a semi-analytical probability density function (PDF) of the receiver noise. An obvious increase of the power penalty at high signal input powers is found for low input OSNR, which is due both to the large nonlinear differential phase noise and to the dependence of the curvature of the BER versus received power curve on the differential phase noise standard deviation.
Econophysics: Two-phase behaviour of financial markets
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H. Eugene
2003-01-01
Buying and selling in financial markets is driven by demand, which can be quantified by the imbalance in the number of shares transacted by buyers and sellers over a given time interval. Here we analyse the probability distribution of demand, conditioned on its local noise intensity Σ, and discover the surprising existence of a critical threshold, Σc. For Σ < Σc, the most probable value of demand is roughly zero; we interpret this as an equilibrium phase in which neither buying nor selling predominates. For Σ > Σc, two most probable values emerge that are symmetrical around zero demand, corresponding to excess demand and excess supply; we interpret this as an out-of-equilibrium phase in which the market behaviour is mainly buying for half of the time, and mainly selling for the other half.
Fragment size distribution in viscous bag breakup of a drop
NASA Astrophysics Data System (ADS)
Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.
2015-11-01
In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The regime of breakup considered is observed between 12 <= We <= 16 for Oh <= 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh we observe a larger number of small-diameter drops and a smaller number of large-diameter drops. On the other hand, with increasing Oh for a fixed We the opposite is seen.
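The contrast between number and volume probability distributions follows from the d³ re-weighting alone, as this toy sketch shows; the two gamma-distributed fragment populations are synthetic stand-ins for the bag and rim fragments.

```python
# Number- vs volume-weighted fragment size PDFs: the volume PDF re-weights
# the number PDF by d^3, shifting weight toward the fewer, larger fragments.
import numpy as np

rng = np.random.default_rng(4)
# toy sample: many small "bag" fragments plus a few large "rim" fragments
d = np.concatenate([rng.gamma(4.0, 5.0, 9000),      # bag, ~20 um
                    rng.gamma(40.0, 5.0, 1000)])    # rim, ~200 um

bins = np.linspace(0, d.max(), 60)
p_number, _ = np.histogram(d, bins=bins, density=True)
p_volume, _ = np.histogram(d, bins=bins, weights=d ** 3, density=True)

centers = 0.5 * (bins[1:] + bins[:-1])
print("number-PDF mode at ~", centers[p_number.argmax()])
print("volume-PDF mode at ~", centers[p_volume.argmax()])
```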
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Meng-Zheng; Ye, Liu
An efficient scheme is proposed to implement phase-covariant quantum cloning by using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and the occupation probability distribution in the Fock basis enable us to penetrate the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of the numerical simulation method, we find that the present cloning machine is not an isotropic model, because its output fidelity depends on the polar angle and the azimuthal angle of the initial input state on the Bloch sphere. The fidelity of the actual output clone of the present scheme is slightly smaller than that of the theoretical case. The simulation results are consistent with the theoretical ones. This further corroborates that our scheme based on circuit QED can efficiently implement the PCC transformation.
We investigated the bulk electrical conductivity and microbial population distribution in sediments at a site contaminated with light non-aqueous phase liquid (LNAPL). The bulk conductivity was measured using in situ vertical resistivity probes, while the most probable number met...
Bayesian probability of success for clinical trials using historical data
Ibrahim, Joseph G.; Chen, Ming-Hui; Lakshminarayanan, Mani; Liu, Guanghan F.; Heyse, Joseph F.
2015-01-01
Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for inclusion of covariates as well as allowing for historical data based on the treatment regimen, and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang-Stein’s work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while including the information of covariates for patient characteristics in the trial population for planning future pre-market or post-market trials. PMID:25339499
The role of community structure on the nature of explosive synchronization.
Lotfi, Nastaran; Rodrigues, Francisco A; Darooneh, Amir Hossein
2018-03-01
In this paper, we analyze explosive synchronization in networks with a community structure. The results of our study indicate that the mesoscopic structure of the networks can affect the synchronization of coupled oscillators. With the variation of three parameters (the degree probability distribution exponent, the community size probability distribution exponent, and the mixing parameter), we can obtain a fast or slow phase transition. Besides, in some cases we can have communities which are synchronized internally but not with other communities, and vice versa. We also show that there is a limit in these mesoscopic structures which suppresses the second-order phase transition and results in explosive synchronization. This could be considered as a tuning parameter changing the transition of the system from second order to first order.
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
NASA Astrophysics Data System (ADS)
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), ozone (O3) and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This will help in minimizing the uncertainty in pollution resource estimates and improving the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
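A hedged sketch of the distribution-selection step: fit the four candidate families to synthetic stand-in readings and rank them by one goodness-of-fit measure (here the Kolmogorov-Smirnov statistic; the paper combines five criteria via a weight-of-ranks method).

```python
# Fit the four candidate distributions to synthetic pollutant readings and
# rank them by the Kolmogorov-Smirnov statistic (smaller is better).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.gamma(shape=3.0, scale=15.0, size=1000)   # stand-in API readings

candidates = {
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(data)
    ks = stats.kstest(data, dist.cdf, args=params).statistic
    print(f"{name:12s} KS = {ks:.4f}")
```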
NASA Astrophysics Data System (ADS)
Ertaş, Mehmet; Keskin, Mustafa
2015-03-01
By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes and exhibit a dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss these dynamic phase diagrams with the dynamic phase diagrams obtained within Glauber-type stochastic dynamics based on the mean-field theory.
Two statistical mechanics aspects of complex networks
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Biely, Christoly
2006-12-01
By adopting an ensemble interpretation of non-growing rewiring networks, network theory can be reduced to a counting problem of possible network states and an identification of their associated probabilities. We present two scenarios of how different rewiring schemes can be used to control the state probabilities of the system. In particular, we review how, by generalizing the linking rules of random graphs in combination with superstatistics and quantum mechanical concepts, one can establish an exact relation between the degree distribution of any given network and the nodes’ linking probability distributions. In a second approach, we control state probabilities by a network Hamiltonian, whose characteristics are motivated by biological and socio-economical statistical systems. We demonstrate that a thermodynamics of networks becomes a fully consistent concept, allowing one to study, e.g., ‘phase transitions’ and to compute entropies through thermodynamic relations.
Analyzing phenological extreme events over the past five decades in Germany
NASA Astrophysics Data System (ADS)
Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp
2010-05-01
As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last 5 decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence for extremely early or late phenological events. We analyse four phenological groups, "beginning of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as the DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis, comparing differences in extreme onsets between three common northern conifers, and we conducted a within-species analysis with different phases of horse chestnut throughout a year. The first statistical approach fits the data to a Gaussian model using traditional statistical techniques and then analyses the extreme quantile. The key point of this approach is the adoption of an appropriate probability density function (PDF) for the observed data and the assessment of the change of the PDF parameters in time. The full analytical description in terms of the estimated PDF for defined time steps of the observation period allows probability assessments of extreme values for, e.g., annual or decadal time steps. Related to this approach is the possibility of counting the onsets which fall in our defined extreme percentiles. The estimation of the probability of extreme events on the basis of the whole data set stands in contrast to analyses with the generalized extreme value (GEV) distribution. The second approach deals with the extreme PDFs themselves and fits the GEV distribution to annual minima of phenological series to provide useful estimates of return levels. For flowering and leaf unfolding phases, exceptionally early extremes are seen since the mid 1980s, and especially in the single years 1961, 1990 and 2007, whereas exceptionally late events are seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts to early extremes than spring phases, and leaf colouring phases reveal an increasing probability of late extremes. The GEV-estimated 100-year events for Picea, Pinus and Larix amount to extremely early events of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we get a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
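The GEV step can be sketched compactly. The series below is synthetic, not DWD data, and negating the annual minima so that they can be treated as maxima is a standard convention assumed here.

```python
# Fit a GEV to annual minima of onset anomalies (negated, so that early
# extremes become maxima) and read off the 100-year return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(6)
annual_min_anomaly = rng.normal(-20, 6, size=50)   # earliest onset per year, days

neg = -annual_min_anomaly                  # work with maxima of the negated data
shape, loc, scale = genextreme.fit(neg)
return_level_100 = -genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)
print("estimated 100-year early extreme:", round(return_level_100, 1), "days")
```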
Does the central limit theorem always apply to phase noise? Some implications for radar problems
NASA Astrophysics Data System (ADS)
Gray, John E.; Addison, Stephen R.
2017-05-01
The phase noise problem or Rayleigh problem occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed and invoking the Central Limit Theorem (CLT) to argue that the superposition of relatively few random components obeys the CLT, so that the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the summation of identically distributed random variables has a characteristic function (CF) that is the product of the individual CFs. The product of the CFs for phase noise can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
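For uniform phase noise the CF argument can be made concrete: one component cos θ has characteristic function J0(t), so a sum of N i.i.d. components has CF J0(t)^N, which can be compared against the Gaussian CF the CLT would predict. A short check (illustrative, not from the paper):

```python
# Compare the exact CF of a sum of N uniform-phase components, J0(t)^N,
# with the Gaussian CF exp(-N t^2 / 4) implied by the CLT (Var[cos] = 1/2).
import numpy as np
from scipy.special import j0

t = np.linspace(0.0, 5.0, 11)
for N in (2, 5, 50):
    cf_exact = j0(t) ** N
    cf_gauss = np.exp(-N * t ** 2 / 4.0)
    print(N, float(np.max(np.abs(cf_exact - cf_gauss))))
```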
Task specificity of attention training: the case of probability cuing
Jiang, Yuhong V.; Swallow, Khena M.; Won, Bo-Yeong; Cistera, Julia D.; Rosenbaum, Gail M.
2014-01-01
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent, testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferrable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging. PMID:25113853
Phase walk analysis of leptokurtic time series.
Schreiber, Korbinian; Modest, Heike I; Räth, Christoph
2018-06-01
The Fourier phase information plays a key role in the quantified description of nonlinear data. We present a novel tool for time series analysis that identifies nonlinearities by sensitively detecting correlations among the Fourier phases. The method, called phase walk analysis, is based on well-established measures from random walk analysis, which are here applied to the unwrapped Fourier phases of time series. We provide an analytical description of its functionality and demonstrate its capabilities on systematically controlled leptokurtic noise. Hereby, we investigate the properties of leptokurtic time series and their influence on the Fourier phases of time series. The phase walk analysis is applied to measured and simulated intermittent time series whose probability density distributions are approximated by power laws. We use the day-to-day returns of the Dow Jones industrial average, a synthetic time series with tailored nonlinearities mimicking the power-law behavior of the Dow Jones, and the acceleration of the wind at an Atlantic offshore site. Testing for nonlinearities by means of surrogates shows that the new method yields strong significances for nonlinear behavior. Due to the drastically decreased computing time compared with embedding space methods, the number of surrogate realizations can be increased by orders of magnitude. Thereby, the probability distribution of the test statistics can be derived and parameterized very accurately, which allows for much more precise tests on nonlinearities.
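A loose sketch of the idea (the paper's actual walk statistics and surrogate scheme differ in detail): unwrap the Fourier phases, treat their increments as steps of a walk, and compare an excursion statistic against shuffled surrogates.

```python
# Phase-walk style statistic: excursion of a walk built from increments of
# the unwrapped Fourier phases, compared against permutation surrogates.
import numpy as np

rng = np.random.default_rng(7)

def phase_walk_excursion(x):
    phases = np.unwrap(np.angle(np.fft.rfft(x)))
    steps = np.diff(phases)
    walk = np.cumsum(steps - steps.mean())   # detrended cumulative walk
    return np.abs(walk).max()

x = rng.standard_t(df=3, size=4096)          # leptokurtic test series
stat = phase_walk_excursion(x)
surrogates = [phase_walk_excursion(rng.permutation(x)) for _ in range(200)]
print("excursion:", round(stat, 2),
      "surrogate 95%:", round(float(np.quantile(surrogates, 0.95)), 2))
```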
Phase relations in iron-rich systems and implications for the earth's core
NASA Technical Reports Server (NTRS)
Anderson, William W.; Svendsen, Bob; Ahrens, Thomas J.
1987-01-01
Recent experimental data concerning the properties of iron, iron sulfide, and iron oxide at high pressures are combined with theoretical arguments to constrain the probable behavior of the Fe-rich portions of the Fe-O and Fe-S phase diagrams. Phase diagrams are constructed for the Fe-S-O system at core pressures and temperatures. These properties are used to evaluate the current temperature distribution and composition of the core.
NASA Astrophysics Data System (ADS)
Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi
2007-11-01
A thorough simulation and evaluation of phase noise for optical amplification using a semiconductor optical amplifier (SOA) is very important for predicting its performance in differential phase shift keyed (DPSK) applications. In this paper, the standard deviation and probability distribution of the differential phase noise are obtained from the statistics of simulated differential phase noise. By using a full-wave model of the SOA, the noise performance in the entire operation range can be investigated. It is shown that nonlinear phase noise substantially contributes to the total phase noise in the case of a noisy signal amplified by a saturated SOA, and that the nonlinear contribution is larger with shorter SOA carrier lifetime. The power penalty due to differential phase noise is evaluated using a semi-analytical probability density function (PDF) of the receiver noise. An obvious increase of the power penalty at high signal input powers is found for low input OSNR, which is due both to the large nonlinear differential phase noise and to the dependence of the curvature of the BER versus received power curve on the differential phase noise standard deviation.
Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states
NASA Astrophysics Data System (ADS)
de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.
2015-12-01
Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.
Characterization of Cloud Water-Content Distribution
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2010-01-01
The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
Multivariate η-μ fading distribution with arbitrary correlation model
NASA Astrophysics Data System (ADS)
Ghareeb, Ibrahim; Atiani, Amani
2018-03-01
An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with a post-detection diversity reception system over arbitrarily correlated and not necessarily identically distributed η-μ fading channels is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
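One member of this family is the biased-coin up-and-down rule, sketched below under an assumed logistic dose-toxicity curve: stepping down after a toxicity and up with probability b = Γ/(1-Γ) after a non-toxicity centers assignments near the dose whose toxicity probability is the target quantile Γ.

```python
# Biased-coin random walk rule: down after toxicity; up with probability
# b = gamma/(1-gamma) after no toxicity, otherwise stay. Assignments pile up
# near the dose with toxicity probability gamma (here 0.25). Illustrative.
import numpy as np

rng = np.random.default_rng(8)
doses = np.linspace(0.0, 1.0, 11)
tox_prob = 1.0 / (1.0 + np.exp(-8.0 * (doses - 0.55)))   # assumed true curve
gamma = 0.25
b = gamma / (1.0 - gamma)

level, visits = 0, np.zeros_like(doses)
for _ in range(2000):
    visits[level] += 1
    toxic = rng.random() < tox_prob[level]
    if toxic:
        level = max(level - 1, 0)
    elif rng.random() < b:
        level = min(level + 1, len(doses) - 1)

best = visits.argmax()
print("most visited dose:", doses[best],
      "true tox prob there:", round(float(tox_prob[best]), 2))
```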
The investigation of the lateral interaction effects on traffic flow behavior under open boundaries
NASA Astrophysics Data System (ADS)
Bouadi, M.; Jetto, K.; Benyoussef, A.; El Kenz, A.
2017-11-01
In this paper, an open-boundary traffic flow system is studied by taking into account the lateral interaction with spatial defects. For a random defect distribution, if the vehicle velocities are weakly correlated, the traffic phases can be predicted by considering the corresponding inflow and outflow functions. Conversely, if the vehicle velocities are strongly correlated, a phase segregation appears inside the system's bulk, which induces the appearance of the maximum current. Such velocity correlation depends mainly on the defect densities and the probabilities of lateral deceleration. However, for a compact defect distribution, the traffic phases are predictable by using the inflow at the beginning of the system, the inflow entering the defect zone, and the outflow function.
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α, the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
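The posterior-threshold ingredient of such designs is easy to state in code. The sketch below computes, under a conjugate Beta prior, the posterior probability that the true response rate exceeds the target; the prior and the stage data are hypothetical, and the full predictive two-stage machinery (prior predictive averaging over future data, distinct analysis and design priors) is not reproduced here.

```python
from scipy import stats

# Posterior probability that the true response rate p exceeds the target,
# with a conjugate Beta(a, b) prior and x responses among n patients.
a, b = 1.0, 1.0        # noninformative analysis prior (an assumption)
target = 0.20          # pre-specified target response rate
n, x = 25, 8           # hypothetical stage-1 outcome

posterior = stats.beta(a + x, b + n - x)
print(f"P(p > {target} | data) = {posterior.sf(target):.3f}")
```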
Characterization of the Ionospheric Scintillations at High Latitude using GPS Signal
NASA Astrophysics Data System (ADS)
Mezaoui, H.; Hamza, A. M.; Jayachandran, P. T.
2013-12-01
Transionospheric radio signals experience both amplitude and phase variations as a result of propagation through a turbulent ionosphere; this phenomenon is known as ionospheric scintillation. As a result of these fluctuations, Global Positioning System (GPS) receivers lose track of signals and consequently incur position and navigational errors. There is therefore a need to study these scintillations and their causes, not only to resolve the navigational problem but also to develop analytical and numerical radio propagation models. To quantify and qualify these scintillations, we analyze the probability distribution functions (PDFs) of L1 GPS signals at a 50 Hz sampling rate using Canadian High Arctic Ionospheric Network (CHAIN) measurements. The raw GPS signal is detrended using a wavelet-based technique, and the detrended amplitude and phase of the signal are used to construct probability distribution functions (PDFs) of the scintillating signal. The resulting PDFs are non-Gaussian. From the PDF functional fits, the moments are estimated. The results reveal a general non-trivial parabolic relationship between the normalized fourth and third moments for both the phase and amplitude of the signal. The calculated higher-order moments of the amplitude and phase distribution functions will help quantify some of the scintillation characteristics and in the process provide a base for forecasting, i.e., the development of a scintillation climatology model. This statistical analysis, including power spectra, along with a numerical simulation, will constitute the backbone of a high-latitude scintillation model.
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1989-01-01
It is possible to calculate expectation values and transition probabilities from the Wigner phase-space distribution function. Based on the canonical transformation properties of the Wigner function, an algorithm is developed for calculating these quantities in quantum optics for coherent and squeezed states. It is shown that the expectation value of a dynamical variable can be written in terms of its vacuum expectation value of the canonically transformed variable. Parallel-axis theorems are established for the photon number and its variant. It is also shown that the transition probability between two squeezed states can be reduced to that of the transition from one squeezed state to vacuum.
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational costs (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, rarely available in other approaches, yet crucial information such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
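The core mechanic, pushing a cheaply estimated time-of-flight distribution through a derived mapping to obtain the saturation distribution, can be sketched as follows. The lognormal time-of-flight law and the particular monotone mapping are placeholders chosen for illustration; the paper derives its mapping analytically from the Buckley-Leverett fractional-flow solution.

```python
import numpy as np

rng = np.random.default_rng(11)

# Time-of-flight samples (cheap to obtain, e.g. by Monte Carlo over the
# permeability/porosity fields); lognormal is an illustrative choice.
tof = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# Illustrative monotone mapping from time of flight to water saturation
# at a fixed location and time; the real mapping comes from the
# Buckley-Leverett solution and is not reproduced here.
def saturation_from_tof(tau, t=1.0):
    return 1.0 / (1.0 + (tau / t) ** 1.5)

s = saturation_from_tof(tof)
p10, p50, p90 = np.percentile(s, [10, 50, 90])
print(f"saturation P10={p10:.3f}  P50={p50:.3f}  P90={p90:.3f}")
print(f"mean={s.mean():.3f}  std={s.std():.3f}")
```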
Aftershock Energy Distribution by Statistical Mechanics Approach
NASA Astrophysics Data System (ADS)
Daminelli, R.; Marcellini, A.
2015-12-01
The aim of our work is to find the most probable distribution of the energy of aftershocks. We start from one of the fundamental principles of statistical mechanics, which, in the case of aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are g_i cells in the aftershock phase space characterised by the same released energy ε_i. We can therefore assume that Maxwell-Boltzmann statistics applies to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can then be written as follows: n(ε) = A g(ε) exp(-βε), where n(ε) is the number of aftershocks with energy ε, and A and β are constants. Under the above hypothesis, we can assume that g(ε) is proportional to ε. We selected and analysed different aftershock sequences (data extracted from the earthquake catalogs of SCEC, INGV-CNT and other institutions) with a minimum retained magnitude ML = 2 (in some cases ML = 2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very low energy band, where our model moderately overestimates the counts.
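A hedged numerical check of the proposed form n(ε) = A g(ε) exp(-βε) with g(ε) ∝ ε: the sketch below generates synthetic "aftershock energies" from exactly that law (a Gamma distribution with shape 2) and recovers β by least squares on binned counts. Catalog handling and magnitude-to-energy conversion are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic energies drawn from n(eps) ~ eps * exp(-beta*eps), i.e. a
# Gamma(shape=2) law, standing in for catalog-derived energies.
beta_true = 0.5
energies = rng.gamma(shape=2.0, scale=1.0 / beta_true, size=20000)

# Bin the energies and fit the Maxwell-Boltzmann-like form of the abstract.
counts, edges = np.histogram(energies, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])

def mb_form(eps, A, beta):
    return A * eps * np.exp(-beta * eps)   # g(eps) proportional to eps

mask = counts > 0                          # skip empty high-energy bins
(A_fit, beta_fit), _ = curve_fit(mb_form, centers[mask], counts[mask],
                                 p0=(float(counts.max()), 1.0))
print(f"fitted beta = {beta_fit:.3f} (true {beta_true})")
```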
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model the actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
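The bookkeeping that distinguishes transition matrix Monte Carlo from conventional sampling, recording attempted-move acceptance probabilities rather than visit counts, can be shown on a toy chain. Here the macrostate weights are a known bimodal function (mimicking a liquid/vapor density probability distribution) so the reconstruction can be checked; a production implementation would not know them, would typically add a biasing function to flatten sampling, and this sketch assumes every macrostate gets visited.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known bimodal macrostate weights standing in for a liquid/vapor density
# probability distribution; a real simulation would not know log_pi.
N_max = 60
N_grid = np.arange(N_max + 1)
log_pi = np.logaddexp(-0.5 * ((N_grid - 15) / 4.0) ** 2,
                      -0.5 * ((N_grid - 45) / 4.0) ** 2)

C = np.zeros((N_max + 1, 3))   # collection matrix: columns = down, stay, up
N = 30
for _ in range(500_000):
    step = rng.choice((-1, 1))
    Np = N + step
    acc = np.exp(min(0.0, log_pi[Np] - log_pi[N])) if 0 <= Np <= N_max else 0.0
    C[N, 1 + step] += acc      # record the acceptance probability ...
    C[N, 1] += 1.0 - acc       # ... whether or not the move is taken
    if rng.random() < acc:
        N = Np

# Reconstruct the macrostate distribution from estimated transition rates
# via detailed balance: Pi(n) T(n -> n+1) = Pi(n+1) T(n+1 -> n).
T = C / C.sum(axis=1, keepdims=True)
log_Pi = np.zeros(N_max + 1)
for n in range(N_max):
    log_Pi[n + 1] = log_Pi[n] + np.log(T[n, 2] / T[n + 1, 0])

left = N_grid[:31][np.argmax(log_Pi[:31])]
right = N_grid[31:][np.argmax(log_Pi[31:])]
print("recovered peaks near N =", left, "and", right)
```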
NASA Astrophysics Data System (ADS)
da Silva, Roberto
2018-06-01
This work explores the features of a graph generated by agents that hop from one node to another, where the nodes have evolutionary attractiveness. The jumps are governed by Boltzmann-like transition probabilities that depend both on the Euclidean distance between the nodes and on the ratio (β) of the attractiveness between them. It is shown that persistent nodes, i.e., nodes that have never been reached by this special random walk, are possible in the stationary limit, differently from the case where the attractiveness is fixed and equal to one for all nodes (β = 1). Simultaneously, we also investigate the spectral properties and statistics related to the attractiveness and degree distribution of the evolutionary network. Finally, a study of the crossover between the persistent and non-persistent phases was performed, and we also observed the existence of a special type of transition probability which leads to power-law behaviour for the time evolution of the persistence.
DOT National Transportation Integrated Search
2006-01-01
The project focuses on two major issues - the improvement of current work zone design practices and an analysis of : vehicle interarrival time (IAT) and speed distributions for the development of a digital computer simulation model for : queues and t...
Two coupled, driven Ising spin systems working as an engine.
Basu, Debarshi; Nandi, Joydip; Jayannavar, A M; Marathe, Rahul
2017-05-01
Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless otherwise supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where the system undergoes the typical two isothermal and two adiabatic changes. Unlike these models, we study a prototype system of two classical Ising spins driven by time-dependent, phase-different external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends only on a single parameter, namely the phase between the two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We find the full distributions of these quantities numerically and study the tails of these distributions. We also study the reliability of the engine. We find that the fluctuations dominate the mean values of efficiency and COP, and that their probability distributions are broad with power-law tails.
Calculation of a fluctuating entropic force by phase space sampling.
Waters, James T; Kim, Harold D
2015-07-01
A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
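The Monte Carlo failure-probability technique mentioned here reduces to comparing sampled impact loads against sampled strengths. The sketch below shows that comparison; the distributions and numbers are invented placeholders, not the program's actual values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Placeholder distributions for water-impact velocity and component strength
# capability (expressed on a common velocity scale); values are invented.
impact_velocity = rng.normal(loc=25.0, scale=4.0, size=n)   # m/s
capability = rng.normal(loc=30.0, scale=3.0, size=n)        # m/s equivalent

# A component fails on impacts that exceed its strength capability.
print(f"estimated failure probability: {np.mean(impact_velocity > capability):.4f}")
```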
Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01
We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δ_i, with probability distribution P(Δ_i) = p δ(Δ_i - Δ) + (1 - p) δ(Δ_i). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derrida, B.; Spohn, H.
We show that the problem of a directed polymer on a tree with disorder can be reduced to the study of nonlinear equations of reaction-diffusion type. These equations admit traveling wave solutions that move at all possible speeds above a certain minimal speed. The speed of the wavefront is the free energy of the polymer problem and the minimal speed corresponds to a phase transition to a glassy phase similar to the spin-glass phase. Several properties of the polymer problem can be extracted from the correspondence with the traveling wave: probability distribution of the free energy, overlaps, etc.
Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.
2005-01-01
Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks, limitations of extant breeding waterfowl survey programs in North America and the logistical challenges and costs of conducting surveys in northern breeding regions, high winter area philopatry in some species and its potential conservation implications, and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a two-phase adaptive stratified strip-transect sampling plan to estimate the wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks and to provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation. We discuss advantages and limitations of this sampling design for estimating wintering sea duck abundance and mapping distribution and suggest improvements for future surveys.
Han, Ruisong; Yang, Wei; Wang, Yipeng; You, Kaiming
2017-05-01
Clustering is an effective technique used to reduce energy consumption and extend the lifetime of wireless sensor network (WSN). The characteristic of energy heterogeneity of WSNs should be considered when designing clustering protocols. We propose and evaluate a novel distributed energy-efficient clustering protocol called DCE for heterogeneous wireless sensor networks, based on a Double-phase Cluster-head Election scheme. In DCE, the procedure of cluster head election is divided into two phases. In the first phase, tentative cluster heads are elected with the probabilities which are decided by the relative levels of initial and residual energy. Then, in the second phase, the tentative cluster heads are replaced by their cluster members to form the final set of cluster heads if any member in their cluster has more residual energy. Employing two phases for cluster-head election ensures that the nodes with more energy have a higher chance to be cluster heads. Energy consumption is well-distributed in the proposed protocol, and the simulation results show that DCE achieves longer stability periods than other typical clustering protocols in heterogeneous scenarios.
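A minimal sketch of the double-phase election idea, assuming invented energy levels, a toy election probability scaled by residual energy, and a random (rather than geometric) cluster assignment; the exact DCE probability formula is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes = 100

# Two-level energy heterogeneity: normal and advanced nodes (values are
# illustrative); residual energy has decayed by a random amount.
initial = rng.choice([0.5, 1.0], size=n_nodes)
residual = initial * rng.uniform(0.2, 1.0, size=n_nodes)

# Phase 1: tentative cluster heads, elected with probability scaled by
# relative residual energy (this functional form is our assumption).
p_opt = 0.1
p_elect = np.clip(p_opt * n_nodes * residual / residual.sum(), 0.0, 1.0)
tentative = np.flatnonzero(rng.random(n_nodes) < p_elect)

# Toy clustering: every node joins a random tentative head. Phase 2: the
# final head of each cluster is its member with the most residual energy.
final_heads = set()
if tentative.size:
    cluster_of = tentative[rng.integers(0, tentative.size, size=n_nodes)]
    for h in tentative:
        members = np.flatnonzero(cluster_of == h)
        if members.size:
            final_heads.add(int(members[residual[members].argmax()]))
print(f"{tentative.size} tentative heads -> {len(final_heads)} final heads")
```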
Dynamics of social contagions with local trend imitation.
Zhu, Xuzhen; Wang, Wei; Cai, Shimin; Stanley, H Eugene
2018-05-09
Research on social contagion dynamics has not yet included a theoretical analysis of the ubiquitous local trend imitation (LTI) characteristic. We propose a social contagion model with a tent-like adoption probability to investigate the effect of this LTI characteristic on behavior spreading. We also propose a generalized edge-based compartmental theory to describe the proposed model. Through extensive numerical simulations and theoretical analyses, we find a crossover in the phase transition: when the LTI capacity is strong, the growth of the final adoption size exhibits a second-order phase transition. When the LTI capacity is weak, we see a first-order phase transition. For a given behavioral information transmission probability, there is an optimal LTI capacity that maximizes the final adoption size. Finally we find that the above phenomena are not qualitatively affected by the heterogeneous degree distribution. Our suggested theoretical predictions agree with the simulation results.
1983-03-08
Distribution unlimited; approved for public release. ... a block copolymer can sometimes be transformed into a homogeneous, disordered structure. The temperature of the transition depends on the degree of ... probably that the morphology is gradually transformed from spherical to cylindrical and eventually to lamellar packing. There is, however, no evidence of ...
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
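The common core of inverse-probability corrections is resampling with weights that undo the known selection mechanism. The sketch below shows only that plain resampling step on a toy balanced case-control sample; the paper's stochastic oversampling and parametric bagging add noise injection and parametric modeling on top of it, and the prevalence here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-phase case-control sample: cases (y = 1) were enriched to a
# 50/50 mix, while the assumed population prevalence is 5%.
n = 2000
y = rng.integers(0, 2, size=n)
prevalence = 0.05

# Inverse-probability weights undo the known selection mechanism.
sel_prob = np.where(y == 1, 0.5 / prevalence, 0.5 / (1.0 - prevalence))
weights = 1.0 / sel_prob
weights /= weights.sum()

# Resampling with these weights yields a pseudo-sample whose class mix
# resembles the population (the core of inverse-probability oversampling).
idx = rng.choice(n, size=n, replace=True, p=weights)
print("case fraction before:", y.mean(), " after:", y[idx].mean())
```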
Phase transition of social learning collectives and the echo chamber.
Mori, Shintaro; Nakayama, Kazuaki; Hisakado, Masato
2016-11-01
We study a simple model for social learning agents in a restless multiarmed bandit. There are N agents, and the bandit has M good arms that change to bad with the probability q_{c}/N. If the agents do not know a good arm, they look for it by a random search (with the success probability q_{I}) or copy the information of other agents' good arms (with the success probability q_{O}) with probabilities 1-p or p, respectively. The distribution of the agents in M good arms obeys the Yule distribution with the power-law exponent 1+γ in the limit N,M→∞, and γ=1+(1-p)q_{I}/(pq_{O}). The system shows a phase transition at p_{c}=q_{I}/(q_{I}+q_{O}). For p ...
Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O
2009-07-01
The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite-range interactions. Specifically, we assign to each site a quenched random field whose value is in accordance with a generic distribution that bears platykurtic and leptokurtic distributions depending on a single parameter τ < 3. For τ < 5/3, such distributions, which are basically Student-t and r-distributions extended to all plausible real degrees of freedom, present a finite standard deviation; otherwise the distribution has the same asymptotic power-law behavior as an α-stable Lévy distribution with α = (3-τ)/(τ-1). For every value of τ, at a specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we report the emergence of an inflection point in the temperature-PDF width phase diagrams for distributions broader than the Cauchy-Lorentz (τ = 2), which is accompanied by a divergent free energy per spin (at zero temperature).
Detection of digital FSK using a phase-locked loop
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Simon, M. K.
1975-01-01
A theory is presented for the design of a digital FSK receiver which employs a phase-locked loop to set up the desired matched filter as the arriving signal frequency switches. The developed mathematical model makes it possible to establish the error probability performance of systems which employ a class of digital FM modulations. The noise mechanism which accounts for decision errors is modeled on the basis of the Meyr distribution and renewal Markov process theory.
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, computations are interpreted as the outcome of interactions among elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among subsets of elements, so that the elements evolve toward an equilibrium, unstable, or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle the probabilistic mode of computation used to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
Bimodality emerges from transport model calculations of heavy ion collisions at intermediate energy
NASA Astrophysics Data System (ADS)
Mallik, S.; Das Gupta, S.; Chaudhuri, G.
2016-04-01
This work is a continuation of our effort [S. Mallik, S. Das Gupta, and G. Chaudhuri, Phys. Rev. C 91, 034616 (2015)], 10.1103/PhysRevC.91.034616 to examine if signatures of a phase transition can be extracted from transport model calculations of heavy ion collisions at intermediate energy. A signature of first-order phase transition is the appearance of a bimodal distribution in P_m(k) in finite systems. Here P_m(k) is the probability that the maximum of the multiplicity distribution occurs at mass number k. Using a well-known model for event generation [Boltzmann-Uehling-Uhlenbeck (BUU) plus fluctuation], we study two cases of central collision: mass 40 on mass 40 and mass 120 on mass 120. Bimodality is seen in both cases. The results are quite similar to those obtained in statistical model calculations. An intriguing feature is seen. We observe that at the energy where bimodality occurs, other phase-transition-like signatures appear. There are breaks in certain first-order derivatives. We then examine if such breaks appear in standard BUU calculations without fluctuations. They do. The implication is interesting: if a first-order phase transition occurs, it may be possible to recognize it from ordinary BUU calculations. Probably the reason this has not been seen already is that this aspect was not investigated before.
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of the interferometric phases. This PDF is a function of coherence values and can be computed directly from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic, and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
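The bimodal (U-shaped) occupation-time statistics are easy to reproduce numerically. The sketch below simulates a two-site CTRW with Pareto sojourn times of infinite mean (tail exponent α = 1/2, an illustrative choice) and histograms the fraction of time spent on one site; peaks near 0 and 1 signal ergodicity breaking.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 0.5          # sojourn-time tail exponent; alpha < 1 -> infinite mean
T_total = 1e4
n_particles = 2000

fractions = []
for _ in range(n_particles):
    t, site, time_on_0 = 0.0, 0, 0.0
    while t < T_total:
        wait = rng.pareto(alpha) + 1.0       # heavy-tailed sojourn time >= 1
        stay = min(wait, T_total - t)
        if site == 0:
            time_on_0 += stay
        t += stay
        site = 1 - site                      # hop to the other lattice point
    fractions.append(time_on_0 / T_total)

# A U-shaped histogram (peaks near 0 and 1) is the bimodal, arcsine-law-like
# signature of ergodicity breaking: single trajectories stick to one site.
hist, _ = np.histogram(fractions, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```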
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach of the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
Lin, Guoxing
2016-11-21
Anomalous diffusion exists widely in polymer and biological systems. Pulsed-field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in nuclear magnetic resonance and magnetic resonance imaging. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, the exact signal attenuation expression including the finite gradient pulse width effect has not been obtained based on fractional derivatives for PFG anomalous diffusion. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional derivative model, both the probability density function (PDF) of a spin in real space and the PDF of the spin's accumulating phase shift in virtual phase space are MLP distributions. The MLP phase distribution leads to a Mittag-Leffler function based PFG signal attenuation, which differs significantly from the exponential attenuation for normal diffusion and from the stretched exponential attenuation for fractional diffusion based on the fractal derivative model. A complete signal attenuation expression E_α(-D_f b*_{α,β}) including the finite gradient pulse width effect was obtained and it can handle all three types of PFG fractional diffusions. The result was also extended in a straightforward way to give a signal attenuation expression of fractional diffusion in PFG intramolecular multiple quantum coherence experiments, which has an n^β dependence upon the order of coherence, different from the familiar n^2 dependence in normal diffusion. The results obtained in this study are in agreement with the results from the literature. The results in this paper provide a set of new, convenient approximation formalisms to interpret complex PFG fractional diffusion experiments.
NASA Astrophysics Data System (ADS)
Simon, Damien
2011-03-01
The probability distribution of the current in the asymmetric simple exclusion process is expected to undergo a phase transition in the regime of weak asymmetry of the jumping rates. This transition was first predicted by Bodineau and Derrida using a linear stability analysis of the hydrodynamical limit of the process and further arguments have been given by Mallick and Prolhac. However it has been impossible so far to study what happens after the transition. The present paper presents an analysis of the large deviation function of the current on both sides of the transition from a Bethe Ansatz approach of the weak asymmetry regime of the exclusion process.
Devi, Suma Priya Sudarsana; Howe, James R.
2016-01-01
Key points: Purkinje cells of the cerebellum receive ∼180,000 parallel fibre synapses, which have often been viewed as a homogeneous synaptic population and studied using single action potentials. Many parallel fibre synapses might be silent, however, and granule cells in vivo fire in bursts. Here, we used trains of stimuli to study parallel fibre inputs to Purkinje cells in rat cerebellar slices. Analysis of train EPSCs revealed two synaptic components, phase 1 and 2. Phase 1 is initially large and saturates rapidly, whereas phase 2 is initially small and facilitates throughout the train. The two components have a heterogeneous distribution at dendritic sites and different pharmacological profiles. The differential sensitivity of phase 1 and phase 2 to inhibition by pentobarbital and NBQX mirrors the differential sensitivity of AMPA receptors associated with the transmembrane AMPA receptor regulatory protein γ-2 gating in the low- and high-open-probability modes, respectively.
Abstract: Cerebellar granule cells fire in bursts, and their parallel fibre axons (PFs) form ∼180,000 excitatory synapses onto the dendritic tree of a Purkinje cell. As many as 85% of these synapses have been proposed to be silent, but most are labelled for AMPA receptors. Here, we studied PF to Purkinje cell synapses using trains of 100 Hz stimulation in rat cerebellar slices. The PF train EPSC consisted of two components that were present in variable proportions at different dendritic sites: one, with large initial EPSC amplitude, saturated after three stimuli and dominated the early phase of the train EPSC; and the other, with small initial amplitude, increased steadily throughout the train of 10 stimuli and dominated the late phase of the train EPSC. The two phases also displayed different pharmacological profiles. Phase 2 was less sensitive to inhibition by NBQX but more sensitive to block by pentobarbital than phase 1. Comparison of synaptic results with fast glutamate applications to recombinant receptors suggests that the high-open-probability gating mode of AMPA receptors containing the auxiliary subunit transmembrane AMPA receptor regulatory protein γ-2 makes a substantial contribution to phase 2. We argue that the two synaptic components arise from AMPA receptors with different functional signatures and synaptic distributions. Comparisons of voltage- and current-clamp responses obtained from the same Purkinje cells indicate that phase 1 of the EPSC arises from synapses ideally suited to transmit short bursts of action potentials, whereas phase 2 is likely to arise from low-release-probability or 'silent' synapses that are recruited during longer bursts. PMID:27094216
Phase transitions in community detection: A solvable toy model
NASA Astrophysics Data System (ADS)
Ver Steeg, Greg; Moore, Cristopher; Galstyan, Aram; Allahverdyan, Armen
2014-05-01
Recently, it was shown that there is a phase transition in the community detection problem. This transition was first computed using the cavity method, and has been proved rigorously in the case of q = 2 groups. However, analytic calculations using the cavity method are challenging since they require us to understand probability distributions of messages. We study analogous transitions in the so-called “zero-temperature inference” model, where this distribution is supported only on the most likely messages. Furthermore, whenever several messages are equally likely, we break the tie by choosing among them with equal probability, corresponding to an infinitesimal random external field. While the resulting analysis overestimates the thresholds, it reproduces some of the qualitative features of the system. It predicts a first-order detectability transition whenever q > 2 (as opposed to q > 4 according to the finite-temperature cavity method). It also has a regime analogous to the “hard but detectable” phase, where the community structure can be recovered, but only when the initial messages are sufficiently accurate. Finally, we study a semisupervised setting where we are given the correct labels for a fraction ρ of the nodes. For q > 2, we find a regime where the accuracy jumps discontinuously at a critical value of ρ.
Critical behavior in earthquake energy dissipation
NASA Astrophysics Data System (ADS)
Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro
2017-09-01
We explore bursty multiscale energy dissipation from earthquakes flanked by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to largest bursts of activity, such as major earthquakes, as opposed to smaller activations which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.
Li, Xin; Li, Ye
2015-01-01
Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or to predict the breathing motion to allow beam delivery under free-breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the least-squares sense, and the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
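Our reading of the modeling idea, one breathing cycle shaped like a scaled, shifted Rayleigh density whose fast rise and slow decay matches a slowly descending EOE phase, can be sketched with a least-squares fit; the functional form, parameter names, and noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# One respiratory cycle modeled as a scaled, shifted Rayleigh-PDF pulse;
# the parameterization (amp, t0, sigma) is invented for illustration.
def rayleigh_cycle(t, amp, t0, sigma):
    u = np.clip(t - t0, 0.0, None)
    return amp * (u / sigma**2) * np.exp(-u**2 / (2.0 * sigma**2))

rng = np.random.default_rng(8)
t = np.linspace(0.0, 4.0, 400)                       # seconds
true = rayleigh_cycle(t, amp=1.0, t0=0.2, sigma=0.9)
signal = true + rng.normal(scale=0.02, size=t.size)  # noisy "measured" RRS

# Least-squares fit of the Rayleigh-shaped model to the noisy cycle.
popt, _ = curve_fit(rayleigh_cycle, t, signal, p0=(1.0, 0.0, 1.0))
print("fitted (amp, t0, sigma):", np.round(popt, 3))
```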
Damle, Kedar; Majumdar, Satya N; Tripathi, Vikram; Vivo, Pierpaolo
2011-10-21
We compute analytically the full distribution of the Andreev conductance G_NS of a metal-superconductor interface with a large number N_c of transverse modes, using a random matrix approach. The probability distribution P(G_NS, N_c) in the limit of large N_c displays a Gaussian behavior near the average value ...
NASA Astrophysics Data System (ADS)
Yeung, Chuck
2018-06-01
The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗,t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗,t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗,t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions, but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.
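If the auxiliary field is the (signed) distance to the nearest interface, it can be computed directly from a snapshot of the order parameter with a Euclidean distance transform. The snapshot below is a smoothed random field standing in for a TDGL configuration, so the moment diagnostics are only indicative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)

# Smoothed random field standing in for a coarsening TDGL snapshot; its
# sign plays the role of the nonconserved order parameter.
phi = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=8.0)

# Signed distance to the nearest interface (zero crossing of phi):
# positive inside phi > 0 domains, negative inside phi < 0 domains.
d_pos = ndimage.distance_transform_edt(phi > 0)
d_neg = ndimage.distance_transform_edt(phi <= 0)
u = np.where(phi > 0, d_pos, -d_neg)

# Crude Gaussianity diagnostics for P(u); for a true Gaussian field the
# kurtosis ratio below would be close to 3.
v = u.ravel() / u.std()
print("mean:", round(float(v.mean()), 3),
      " kurtosis ratio:", round(float((v**4).mean() / (v**2).mean()**2), 3))
```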
System for Measuring Conditional Amplitude, Phase, or Time Distributions of Pulsating Phenomena
Van Brunt, Richard J.; Cernyar, Eric W.
1992-01-01
A detailed description is given of an electronic stochastic analyzer for use with direct “real-time” measurements of the conditional distributions needed for a complete stochastic characterization of pulsating phenomena that can be represented as random point processes. The measurement system described here is designed to reveal and quantify effects of pulse-to-pulse or phase-to-phase memory propagation. The unraveling of memory effects is required so that the physical basis for observed statistical properties of pulsating phenomena can be understood. The individual unique circuit components that comprise the system and the combinations of these components for various measurements, are thoroughly documented. The system has been applied to the measurement of pulsating partial discharges generated by applying alternating or constant voltage to a discharge gap. Examples are shown of data obtained for conditional and unconditional amplitude, time interval, and phase-of-occurrence distributions of partial-discharge pulses. The results unequivocally show the existence of significant memory effects as indicated, for example, by the observations that the most probable amplitudes and phases-of-occurrence of discharge pulses depend on the amplitudes and/or phases of the preceding pulses. Sources of error and fundamental limitations of the present measurement approach are analyzed. Possible extensions of the method are also discussed. PMID:28053450
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver—which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase—fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801]—whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation—outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Kadowaki, Shuntaro
2017-07-01
We study slowly pulled block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If the external force is slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws. The exponent is close to that of the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated in the cases of Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.
Second look at the spread of epidemics on networks
NASA Astrophysics Data System (ADS)
Kenah, Eben; Robins, James M.
2007-09-01
In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
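The paper's central point, that bond percolation with the marginal transmission probability misses correlations induced by a nondegenerate infectious period, can be seen in a small simulation. Fixed and exponential infectious periods below are tuned to the same marginal transmissibility (≈ 0.45, with β values chosen accordingly), yet the probability of a large outbreak differs; the graph size, rates, and outbreak-size threshold are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(10)
G = nx.fast_gnp_random_graph(2000, 3.0 / 2000, seed=1)

def sir_final_size(G, draw_period, beta):
    """Final size of one SIR outbreak from a random seed. All transmissions
    from a node share its infectious period, which correlates them; bond
    percolation with the marginal probability ignores this correlation."""
    seed = int(rng.integers(G.number_of_nodes()))
    ever = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for v in frontier:
            p = 1.0 - np.exp(-beta * draw_period())  # prob. given this period
            for w in G.neighbors(v):
                if w not in ever and rng.random() < p:
                    ever.add(w)
                    nxt.append(w)
        frontier = nxt
    return len(ever)

# Fixed vs exponential infectious periods with equal marginal transmission
# probability: 1 - exp(-0.6) = 0.822/(1 + 0.822) ~ 0.451.
fixed = [sir_final_size(G, lambda: 1.0, beta=0.6) for _ in range(300)]
expo = [sir_final_size(G, lambda: rng.exponential(1.0), beta=0.822)
        for _ in range(300)]
print("P(large outbreak), fixed period:", np.mean(np.array(fixed) > 100))
print("P(large outbreak), exp. period :", np.mean(np.array(expo) > 100))
```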
Role of conviction in nonequilibrium models of opinion formation
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Anteneodo, Celia
2012-12-01
We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent's opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).
Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics
NASA Astrophysics Data System (ADS)
van Lith, Janneke
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to explain why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approaches. In the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model the probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of abundance of rare and patchily distributed species and is particularly appropriate when sampling in all patches is impossible, but a global estimate of abundance is required.
Optical techniques to feed and control GaAs MMIC modules for phased array antenna applications
NASA Technical Reports Server (NTRS)
Bhasin, K. B.; Anzic, G.; Kunath, R. R.; Connolly, D. J.
1986-01-01
A complex signal distribution system is required to feed and control GaAs monolithic microwave integrated circuits (MMICs) for phased array antenna applications above 20 GHz. Each MMIC module will require one or more RF lines, one or more bias voltage lines, and digital lines to provide a minimum of 10 bits of combined phase and gain control information. In a closely spaced array, the routing of these multiple lines presents difficult topology problems as well as a high probability of signal interference. To overcome GaAs MMIC phased array signal distribution problems optical fibers interconnected to monolithically integrated optical components with GaAs MMIC array elements are proposed as a solution. System architecture considerations using optical fibers are described. The analog and digital optical links to respectively feed and control MMIC elements are analyzed. It is concluded that a fiber optic network will reduce weight and complexity, and increase reliability and performance, but higher power will be required.
Solid oxide fuel cell anode image segmentation based on a novel quantum-inspired fuzzy clustering
NASA Astrophysics Data System (ADS)
Fu, Xiaowei; Xiang, Yuhan; Chen, Li; Xu, Xin; Li, Xi
2015-12-01
High quality microstructure modeling can optimize the design of fuel cells. For accurate three-phase identification of Solid Oxide Fuel Cell (SOFC) microstructure, this paper proposes a novel image segmentation method for YSZ/Ni anode Optical Microscopic (OM) images. Following Quantum Signal Processing (QSP), the proposed approach exploits a quantum-inspired adaptive fuzziness factor to adaptively estimate the energy function in a fuzzy system based on Markov Random Fields (MRF). Before defuzzification, a quantum-inspired probability distribution based on distance and gray correction is proposed, which can adaptively adjust the inaccurate probability estimates of uncertain points caused by noise and edge points. The proposed method improves the accuracy and effectiveness of three-phase identification in microstructural investigations, providing a firm foundation for studying the microstructural evolution and its related properties.
Boost-phase discrimination research
NASA Technical Reports Server (NTRS)
Langhoff, Stephen R.; Feiereisen, William J.
1993-01-01
The final report describes the combined work of the Computational Chemistry and Aerothermodynamics branches within the Thermosciences Division at NASA Ames Research Center directed at understanding the signatures of shock-heated air. Considerable progress was made in determining accurate transition probabilities for the important band systems of NO that account for much of the emission in the ultraviolet region. Research carried out under this project showed that in order to reproduce the observed radiation from the bow shock region of missiles in their boost phase it is necessary to include the Burnett terms in the constitutive equations, account for the non-Boltzmann energy distribution, correctly model the NO formation and rotational excitation process, and use accurate transition probabilities for the NO band systems. This work resulted in significant improvements in the computer code NEQAIR, which models both the radiation and fluid dynamics in the shock region.
Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions
NASA Astrophysics Data System (ADS)
Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia
2018-03-01
Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and analysis of this distribution is useful for the mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, adhesion of the particle to the bubble surface, and detachment. These events are random, so one can only speak of the probability of their occurrence, which directly affects the rate of the process, i.e., the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation, and average feed particle size. Appropriate identification and description of the gas-bubble dispersion parameters help complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas-phase dispersion, via the size distribution of air bubbles, in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods, Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.
Computer Simulation Results for the Two-Point Probability Function of Composite Media
NASA Astrophysics Data System (ADS)
Smith, P.; Torquato, S.
1988-05-01
Computer simulation results are reported for the two-point matrix probability function S2 of two-phase random media composed of disks distributed with an arbitrary degree of impenetrability λ. The novel technique employed to sample S2(r) (which gives the probability of finding the endpoints of a line segment of length r in the matrix) is very accurate and has a fast execution time. Results for the limiting cases λ = 0 (fully penetrable disks) and λ = 1 (hard disks), respectively, compare very favorably with theoretical predictions made by Torquato and Beasley and by Torquato and Lado. Results are also reported for several values of λ that lie between these two extremes: cases which heretofore have not been examined.
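In the fully penetrable limit (λ = 0) the exact result S2(r) = exp(-ρ A∪(r)) is available, where A∪(r) is the union area of two disks of radius R with centers a distance r apart, so a sampling estimator of the kind described can be validated directly. A sketch, assuming a periodic box and illustrative density:

```python
import numpy as np

rng = np.random.default_rng(2)
L, R, rho = 20.0, 0.5, 0.6                     # box size, disk radius, number density
centers = rng.uniform(0, L, (rng.poisson(rho * L * L), 2))

def in_matrix(pts):
    # a point is in the matrix iff no disk center lies within R (periodic box)
    d = np.abs(pts[:, None, :] - centers[None, :, :])
    d = np.minimum(d, L - d)                   # minimum-image convention
    return ~np.any((d ** 2).sum(-1) < R * R, axis=1)

def S2_exact(r):
    # exact result for fully penetrable disks: exp(-rho * union area)
    r = np.minimum(r, 2 * R)
    overlap = 2 * R * R * np.arccos(r / (2 * R)) - 0.5 * r * np.sqrt(4 * R * R - r * r)
    return np.exp(-rho * (2 * np.pi * R * R - overlap))

n = 10_000                                     # point pairs per separation
for r in (0.0, 0.25, 0.5, 1.0, 2.0):
    a = rng.uniform(0, L, (n, 2))
    theta = rng.uniform(0, 2 * np.pi, n)
    b = a + r * np.column_stack((np.cos(theta), np.sin(theta)))
    est = np.mean(in_matrix(a) & in_matrix(b % L))
    print(f"r = {r:4.2f}: S2(MC) = {est:.4f}, S2(exact) = {S2_exact(r):.4f}")
```

At r = 0 this reduces to the matrix area fraction exp(-ρπR²), and for r ≥ 2R it saturates at that value squared, both of which the sampler should reproduce within statistical error.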
NASA Astrophysics Data System (ADS)
Jiang, Shi-Mei; Cai, Shi-Min; Zhou, Tao; Zhou, Pei-Ling
2008-06-01
The two-phase behaviour in financial markets actually means a bifurcation phenomenon: the change of a conditional probability distribution from unimodal to bimodal. We investigate the bifurcation phenomenon in the Hang Seng index. We observe that the bifurcation phenomenon in financial indices is not universal, but specific to certain conditions. For the Hang Seng index and randomly generated time series, the phenomenon emerges only when the power-law exponent of the absolute-increment distribution is between 1 and 2, with an appropriate period. Simulations on a randomly generated time series suggest that the bifurcation phenomenon itself is governed by the statistics of absolute increments, and thus may not reflect essential financial behaviours. However, even under the same distribution of absolute increments, the range over which the bifurcation phenomenon occurs differs considerably between real market data and artificial data, which may reflect certain market information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greb, Arthur; Niemi, Kari; O'Connell, Deborah
2013-12-09
Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
Test of quantum thermalization in the two-dimensional transverse-field Ising model
Blaß, Benjamin; Rieger, Heiko
2016-01-01
We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. PMID:27905523
Phase transition in nonuniform Josephson arrays: Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Pomirchy, L. M.
1994-01-01
A disordered 2D system with Josephson interactions is considered. The disordered XY model describes granular films, Josephson arrays, etc. Two types of disorder are analyzed: (1) a randomly diluted system, in which the Josephson coupling constants J_ij are equal to J with probability p or zero (the bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or excluding it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function, and helicity modulus is analyzed. The phase diagram of the diluted system in the Tc-p plane is obtained.
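A minimal Metropolis sketch of the bond-diluted case (type 1), measuring the energy and specific heat; lattice size, temperature, dilution, and sweep counts below are illustrative assumptions, and helicity-modulus estimation is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
Lside, p, T = 16, 0.7, 0.5                  # lattice size, bond retention prob., temperature
theta = rng.uniform(0, 2 * np.pi, (Lside, Lside))
# quenched dilution: J_ij = 1 with probability p, else 0
Jx = (rng.random((Lside, Lside)) < p).astype(float)   # bond to right neighbor
Jy = (rng.random((Lside, Lside)) < p).astype(float)   # bond to lower neighbor

def local_energy(th, i, j, angle):
    # energy of site (i, j) with its four neighbors if its angle were `angle`
    e = -Jx[i, j] * np.cos(angle - th[i, (j + 1) % Lside])
    e -= Jx[i, (j - 1) % Lside] * np.cos(angle - th[i, (j - 1) % Lside])
    e -= Jy[i, j] * np.cos(angle - th[(i + 1) % Lside, j])
    e -= Jy[(i - 1) % Lside, j] * np.cos(angle - th[(i - 1) % Lside, j])
    return e

E_samples = []
for sweep in range(2000):
    for _ in range(Lside * Lside):
        i, j = rng.integers(Lside, size=2)
        new = theta[i, j] + rng.uniform(-1.0, 1.0)
        dE = local_energy(theta, i, j, new) - local_energy(theta, i, j, theta[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i, j] = new                # Metropolis acceptance
    if sweep >= 1000:                        # discard equilibration sweeps
        E = -np.sum(Jx * np.cos(theta - np.roll(theta, -1, axis=1)))
        E -= np.sum(Jy * np.cos(theta - np.roll(theta, -1, axis=0)))
        E_samples.append(E)

E = np.array(E_samples)
C = E.var() / (T * T * Lside * Lside)        # specific heat per site from fluctuations
print(f"T = {T}, p = {p}: <E>/N = {E.mean()/Lside**2:.4f}, C = {C:.4f}")
```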
Toward the Probabilistic Forecasting of High-latitude GPS Phase Scintillation
NASA Technical Reports Server (NTRS)
Prikryl, P.; Jayachandran, P.T.; Mushini, S. C.; Richardson, I. G.
2012-01-01
The phase scintillation index was obtained from L1 GPS data collected with the Canadian High Arctic Ionospheric Network (CHAIN) during years of extended solar minimum 2008-2010. Phase scintillation occurs predominantly on the dayside in the cusp and in the nightside auroral oval. We set forth a probabilistic forecast method of phase scintillation in the cusp based on the arrival time of either solar wind corotating interaction regions (CIRs) or interplanetary coronal mass ejections (ICMEs). CIRs on the leading edge of high-speed streams (HSS) from coronal holes are known to cause recurrent geomagnetic and ionospheric disturbances that can be forecast one or several solar rotations in advance. Superposed epoch analysis of phase scintillation occurrence showed a sharp increase in scintillation occurrence just after the arrival of high-speed solar wind and a peak associated with weak to moderate CMEs during the solar minimum. Cumulative probability distribution functions for the phase scintillation occurrence in the cusp are obtained from statistical data for days before and after CIR and ICME arrivals. The probability curves are also specified for low and high (below and above median) values of various solar wind plasma parameters. The initial results are used to demonstrate a forecasting technique on two example periods of CIRs and ICMEs.
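Superposed epoch analysis of occurrence data reduces to stacking fixed windows keyed to the arrival times and averaging across events. A sketch on synthetic stand-in data (the CHAIN observations are not reproduced here; rates and window lengths are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic stand-in: hourly scintillation-occurrence flags over three years
hours = 24 * 3 * 365
occurrence = rng.random(hours) < 0.05                 # background occurrence rate
arrivals = np.sort(rng.choice(hours, 40, replace=False))  # CIR/ICME arrival times
for t0 in arrivals:                                   # inject enhancement after arrival
    sl = slice(int(t0), min(int(t0) + 48, hours))
    occurrence[sl] |= rng.random(sl.stop - sl.start) < 0.15

# superposed epoch analysis: average occurrence in a window keyed to each arrival
window = np.arange(-72, 121)                          # hours relative to arrival
stacks = [occurrence[t0 + window]
          for t0 in arrivals
          if t0 + window[0] >= 0 and t0 + window[-1] < hours]
epoch_mean = np.mean(stacks, axis=0)

print("hours after arrival | mean occurrence")
for h in (-48, -24, 0, 24, 48, 96):
    k = np.where(window == h)[0][0]
    print(f"{h:19d} | {epoch_mean[k]:.3f}")
```

The empirical cumulative probability curves described in the abstract follow by accumulating the same stacked occurrence counts over days before and after the arrivals.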
NASA Technical Reports Server (NTRS)
Smith, O. E.; Adelfang, S. I.
1998-01-01
The wind profile with all of its variations with respect to altitude has been, is now, and will continue to be important for aerospace vehicle design and operations. Wind profile databases and models are used for the vehicle ascent flight design for structural wind loading, flight control systems, performance analysis, and launch operations. This report presents the evolution of wind statistics and wind models from the empirical scalar wind profile model established for the Saturn Program, through the development of the vector wind profile model used for the Space Shuttle design, to the variations of this wind modeling concept for the X-33 program. Because wind is a vector quantity, the vector wind models use the rigorous mathematical probability properties of the multivariate normal probability distribution. When the vehicle ascent steering commands (ascent guidance) are wind biased to the wind profile measured on the day of launch, ascent structural wind loads are reduced and launch probability is increased. This wind load alleviation technique is recommended in the initial phase of vehicle development. The vehicle must fly through the largest allowable load versus altitude to achieve its mission. The Gumbel extreme value probability distribution is used to obtain the probability of exceeding (or not exceeding) the load allowable. The time conditional probability function is derived from the Gumbel bivariate extreme value distribution. This time conditional function is used to calculate wind load persistence increments using 3.5-hour Jimsphere wind pairs. These increments are used to protect the commit-to-launch decision. Other topics presented include the Shuttle load response to smoothed wind profiles, a new gust model, and advancements in wind profile measuring systems. From the lessons learned and knowledge gained from past vehicle programs, the development of future launch vehicles can be accelerated. However, new vehicle programs by their very nature will require specialized support for new databases and analyses for wind, atmospheric parameters (pressure, temperature, and density versus altitude), and weather. It is for this reason that project managers are encouraged to collaborate with natural environment specialists early in the conceptual design phase. Such action will give the lead time necessary to meet the natural environment design and operational requirements and thus reduce development costs.
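The exceedance computation is a direct application of the Gumbel survival function. A sketch with illustrative numbers rather than actual program data:

```python
import numpy as np
from scipy.stats import gumbel_r

# Illustrative stand-in: maxima of a wind-load indicator, fitted to a
# Gumbel (extreme value type I) distribution. All values are assumptions.
samples = gumbel_r.rvs(loc=100.0, scale=12.0, size=500, random_state=5)
loc, scale = gumbel_r.fit(samples)

load_allowable = 140.0
p_exceed = gumbel_r.sf(load_allowable, loc, scale)   # P(load > allowable)
print(f"fitted loc = {loc:.1f}, scale = {scale:.1f}")
print(f"P(exceed allowable)     = {p_exceed:.4f}")
print(f"P(not exceed allowable) = {1 - p_exceed:.4f}")
```

The time conditional probability described in the report extends this to the Gumbel bivariate extreme value distribution for load pairs separated by 3.5 hours, which is not reproduced in this sketch.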
NASA Technical Reports Server (NTRS)
Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina
2004-01-01
A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision of earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
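A sketch of one such time-dependent calculation, using a Brownian passage-time (inverse Gaussian) renewal model and the common clock-advance mapping Δt = Δτ / (stressing rate). All numbers are illustrative, and the paper examines several interaction-probability methodologies beyond this simple one:

```python
from scipy.stats import invgauss

# BPT renewal model: mean recurrence (years) and aperiodicity (illustrative)
mean_rec, alpha = 200.0, 0.5
# scipy's invgauss(mu, scale): mean = mu*scale, CV^2 = mu, so alpha^2 = mu
rv = invgauss(mu=alpha**2, scale=mean_rec / alpha**2)

def cond_prob(t_elapsed, horizon=30.0):
    # P(event within `horizon` years | quiet for t_elapsed years)
    return (rv.cdf(t_elapsed + horizon) - rv.cdf(t_elapsed)) / rv.sf(t_elapsed)

t = 150.0                       # years since the last earthquake
stress_step = 0.5               # bars, from the perturbing event (assumed)
stressing_rate = 0.05           # bars/year -> clock advance of 10 years
dt = stress_step / stressing_rate

print(f"30-yr probability, unperturbed    : {cond_prob(t):.3f}")
print(f"30-yr probability, clock advanced : {cond_prob(t + dt):.3f}")
print(f"stress-change / stressing-rate ratio = {stress_step / stressing_rate:.0f} yr")
```

Sweeping the stress step while holding the recurrence parameters fixed shows how large the ratio must be before the perturbed and unperturbed probabilities separate beyond the forecast spread, which is the comparison at the heart of the abstract.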
NASA Astrophysics Data System (ADS)
Wable, Pawan S.; Jha, Madan K.
2018-02-01
The effects of rainfall and the El Niño Southern Oscillation (ENSO) on groundwater in a semi-arid basin of India were analyzed using Archimedean copulas considering 17 years of data for monsoon rainfall, post-monsoon groundwater level (PMGL) and ENSO Index. The evaluated dependence among these hydro-climatic variables revealed that PMGL-Rainfall and PMGL-ENSO Index pairs have significant dependence. Hence, these pairs were used for modeling dependence by employing four types of Archimedean copulas: Ali-Mikhail-Haq, Clayton, Gumbel-Hougaard, and Frank. For the copula modeling, the results of probability distributions fitting to these hydro-climatic variables indicated that the PMGL and rainfall time series are best represented by Weibull and lognormal distributions, respectively, while the non-parametric kernel-based normal distribution is the most suitable for the ENSO Index. Further, the PMGL-Rainfall pair is best modeled by the Clayton copula, and the PMGL-ENSO Index pair is best modeled by the Frank copula. The Clayton copula-based conditional probability of PMGL being less than or equal to its average value at a given mean rainfall is above 70% for 33% of the study area. In contrast, the spatial variation of the Frank copula-based probability of PMGL being less than or equal to its average value is 35-40% in 23% of the study area during the El Niño phase, while it is below 15% in 35% of the area during the La Niña phase. This copula-based methodology can be applied under data-scarce conditions for exploring the impacts of rainfall and ENSO on groundwater at basin scales.
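The Clayton conditional probability P(U ≤ u | V = v) = ∂C(u,v)/∂v has a closed form, so conditional probabilities of the kind reported can be sketched directly. Parameter values below are placeholders, not the paper's fitted values:

```python
from scipy.stats import lognorm, weibull_min

theta = 2.0                                        # Clayton dependence parameter (assumed)

def clayton_cond(u, v):
    # P(U <= u | V = v) = dC(u, v)/dv for the Clayton copula
    return v ** (-theta - 1) * (u ** (-theta) + v ** (-theta) - 1) ** (-1 / theta - 1)

pmgl_dist = weibull_min(c=2.0, scale=5.0)          # PMGL ~ Weibull (family as in the study)
rain_dist = lognorm(s=0.4, scale=900.0)            # rainfall ~ lognormal (family as in the study)

u = pmgl_dist.cdf(pmgl_dist.mean())                # P(PMGL <= its mean value)
for rain in (600.0, 900.0, 1200.0):
    v = rain_dist.cdf(rain)
    print(f"rain = {rain:6.0f}: P(PMGL <= mean | rain) = {clayton_cond(u, v):.3f}")
```

With a positive-dependence Clayton copula, the conditional probability of a low groundwater level falls as the conditioning rainfall rises, which is the qualitative behavior the study maps spatially.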
Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles
NASA Astrophysics Data System (ADS)
Bhatnagar, Akshay; Gustavsson, K.; Mitra, Dhrubaditya
2018-02-01
We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component VR for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D2. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) | VR|≪R , where the distribution depends solely on R , and (2) | VR|≫R , where the distribution is a function of | VR| alone. The probability distributions in these two regimes are matched along a straight line: | VR|= z*R . Our simulations confirm that this is indeed correct. We further obtain D2 and z* as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.
Major and trace element chemistry of separated fragments from a hibonite-bearing Allende inclusion
NASA Technical Reports Server (NTRS)
Davis, A. M.; Grossman, L.; Allen, J. M.
1978-01-01
The major and trace elements of separated fragments and a bulk sample from CG-11, a hibonite-bearing inclusion in the Allende meteorite, were analyzed. Major element abundances were used to determine the mineralogy of the separated fragments. The high degree of correlation between Eu/Sm ratios and Lu/Yb ratios for the samples studied indicates that their rare earth element (REE) distributions are governed by two components. One, rich in Lu and Eu, is probably hibonite; the other, depleted in these elements, seems to be associated with the secondary alteration phases grossular, nepheline, and anorthite. The REE distribution in CG-11 precludes melting events after formation of the secondary alteration phases, but a melting event involving the primary minerals cannot be excluded. The enrichment of Lu with respect to other measured REE in hibonite can be explained by present REE condensation models. Two Hf-bearing components, most likely hibonite and perovskite, are necessary to account for variations in Sc/Hf ratios in the fragments studied. The lithophile volatiles Na, Mn, Fe, Zn, and probably Cr increase in the same order as the amount of secondary alteration minerals; the volatile siderophile elements Co and Au, however, do not.
Modeling and impacts of the latent heat of phase change and specific heat for phase change materials
NASA Astrophysics Data System (ADS)
Scoggin, J.; Khan, R. S.; Silva, H.; Gokirmak, A.
2018-05-01
We model the latent heats of crystallization and fusion in phase change materials with a unified latent heat of phase change, ensuring energy conservation by coupling the heat of phase change with amorphous and crystalline specific heats. We demonstrate the model with 2-D finite element simulations of Ge2Sb2Te5 and find that the heat of phase change increases local temperature up to 180 K in 300 nm × 300 nm structures during crystallization, significantly impacting grain distributions. We also show in electrothermal simulations of 45 nm confined and 10 nm mushroom cells that the higher amorphous specific heat predicted by this model increases nucleation probability at the end of reset operations. These nuclei can decrease set time, leading to variability, as demonstrated for the mushroom cell.
Quantum mechanics on phase space: The hydrogen atom and its Wigner functions
NASA Astrophysics Data System (ADS)
Campos, P.; Martins, M. G. R.; Fernandes, M. C. B.; Vianna, J. D. M.
2018-03-01
Symplectic quantum mechanics (SQM) considers a non-commutative algebra of functions on a phase space Γ and an associated Hilbert space HΓ to construct a unitary representation of the Galilei group. From this unitary representation the Schrödinger equation is rewritten in phase-space variables, and the Wigner function can be derived without the use of the Liouville-von Neumann equation. In this article the Coulomb potential in three dimensions (3D) is solved completely using the phase-space Schrödinger equation. The Kustaanheimo-Stiefel (KS) transformation is applied, connecting the Coulomb and harmonic oscillator potentials. In this context we determine the energy levels, the probability amplitudes in phase space, and the corresponding Wigner quasi-distribution functions of the 3D hydrogen atom described by the Schrödinger equation in phase space.
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates include the critical exponent β and related exponent ratios.
Chord-length and free-path distribution functions for many-body systems
NASA Astrophysics Data System (ADS)
Lu, Binglin; Torquato, S.
1993-04-01
We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a particle of radius a undergoing discrete free-path motion in the heterogeneous material, and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal-path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.
Uncertainty and Surprise: Ideas from the Open Discussion
NASA Astrophysics Data System (ADS)
Jordan, Michelle E.
Approximately one hundred participants met for three days at a conference entitled "Uncertainty and Surprise: Questions on Working with the Unexpected and Unknowable." The participants were diverse, ranging from researchers in the natural and social sciences (business professors, physicists, ethnographers, nursing school deans) to practitioners and executives in public policy and management (business owners, health care managers, high tech executives), all of whom had varying levels of experience and expertise in dealing with uncertainty and surprise. One group held the traditional, statistical view that uncertainty comes from variance and events that are described by a usually unimodal probability law. A second group was comfortable on the one hand with phase diagrams and the phase transitions that come from systems with multi-modal distributions, and on the other hand with deterministic chaos. A third group was comfortable with emergent events from evolutionary processes that may not obey any probability laws at all.
Quantum Walks on the Line with Phase Parameters
NASA Astrophysics Data System (ADS)
Villagra, Marcos; Nakanishi, Masaki; Yamashita, Shigeru; Nakashima, Yasuhiko
In this paper, a study on discrete-time coined quantum walks on the line is presented. Clear mathematical foundations are still lacking for this quantum walk model. As a step toward this objective, the following question is addressed: Given a graph, what is the probability that a quantum walk arrives at a given vertex after some number of steps? This is a very natural question, and for random walks it can be answered by several different combinatorial arguments. For quantum walks this is a highly non-trivial task. Furthermore, this had previously been achieved only for one specific coin operator (the Hadamard operator) for walks on the line. Even considering only walks on lines, generalizing these computations to a general SU(2) coin operator is a complex task. The main contribution is a closed-form formula for the amplitudes of the state of the walk (which includes the question above) for a general symmetric SU(2) operator for walks on the line. To this end, a coin operator with parameters that alter the phase of the state of the walk is defined. Then, closed-form solutions are computed by means of Fourier analysis and asymptotic approximation methods. We also present some basic properties of the walk, which can be deduced using weak convergence theorems for quantum walks. In particular, the support of the induced probability distribution of the walk is calculated. Then, it is shown how changing the parameters in the coin operator affects the resulting probability distribution.
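The walk itself is easy to simulate: a phase-parameterized symmetric SU(2) coin followed by a conditional shift, iterated, with the induced distribution and its effective support (approximately ±cos θ times the number of steps) read off at the end. The particular coin parameterization and initial state below are assumptions for illustration:

```python
import numpy as np

steps = 200
pos = 2 * steps + 1                                # positions -steps .. steps
psi = np.zeros((pos, 2), dtype=complex)
psi[steps, 0] = 1 / np.sqrt(2)                     # symmetric initial coin state
psi[steps, 1] = 1j / np.sqrt(2)

theta, phi = np.pi / 4, np.pi / 6                  # coin angle and phase parameter
coin = np.array([[np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
                 [np.exp(-1j * phi) * np.sin(theta), -np.cos(theta)]])

for _ in range(steps):
    psi = psi @ coin.T                             # apply coin to the internal state
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                   # component 0 moves right
    shifted[:-1, 1] = psi[1:, 1]                   # component 1 moves left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)              # induced probability distribution
x = np.arange(-steps, steps + 1)
mean = np.dot(x, prob)
std = np.sqrt(np.dot(x ** 2, prob) - mean ** 2)
mask = prob > 1e-4
print(f"total probability = {prob.sum():.6f}")     # unitarity check, should be 1
print(f"mean = {mean:+.2f}, std = {std:.2f}")
print(f"effective support: [{x[mask].min()}, {x[mask].max()}] "
      f"(~ +/- steps*cos(theta) = +/-{steps * np.cos(theta):.0f})")
```

Varying phi shifts weight between the two ballistic peaks, which is the qualitative effect of the phase parameters the paper analyzes in closed form.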
Optical Correlation Techniques In Fluid Dynamics
NASA Astrophysics Data System (ADS)
Schatzel, K.; Schulz-DuBois, E. O.; Vehrenkamp, R.
1981-05-01
Three flow measurement techniques make use of fast digital correlators. (1) Most widespread is photon correlation velocimetry, using crossed laser beams and detecting Doppler-shifted light scattered by small particles in the flow. Depending on the processing of the photon correlogram, this technique yields mean velocity, turbulence level, or even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. (2) Rate correlation based upon threshold crossings of a high-pass filtered laser Doppler signal can be used to obtain velocity correlation functions. The most powerful setup developed in our laboratory uses a phase-locked-loop type tracker and a multibit correlator to analyse time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. (3) Making use of refractive index fluctuations (e.g., in two-phase flows) instead of scattering particles, interferometry with bidirectional fringe counting and digital correlation and probability analysis constitute a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.
Optical correlation techniques in fluid dynamics
NASA Astrophysics Data System (ADS)
Schätzel, K.; Schulz-Dubois, E. O.; Vehrenkamp, R.
1981-04-01
Three flow measurement techniques make use of fast digital correlators. The most widespread is photon correlation velocimetry, using crossed laser beams and detecting Doppler-shifted light scattered by small particles in the flow. Depending on the processing of the photon correlation output, this technique yields mean velocity, turbulence level, and even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. In the second method, rate correlation based upon threshold crossings of a high-pass filtered laser Doppler signal can be used to obtain velocity correlation functions. The most powerful set-up developed in our laboratory uses a phase-locked-loop type tracker and a multibit correlator to analyze time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. The last method makes use of refractive index fluctuations (e.g., in two-phase flows) instead of scattering particles. Interferometry with bidirectional counting, and digital correlation and probability analysis, constitutes a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.
Rényi entropy of the totally asymmetric exclusion process
NASA Astrophysics Data System (ADS)
Wood, Anthony J.; Blythe, Richard A.; Evans, Martin R.
2017-11-01
The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of configuration probabilities are summed, using the matrix product formalism to map the problem to one involving a six direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram, using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy and corrections in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure and we conjecture that this holds for all Rényi entropies. Within the maximal current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
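For small systems the Rényi-2 entropy of the TASEP can be checked by brute force: build the continuous-time generator over all 2^L configurations, solve for the stationary distribution, and sum squared probabilities. A sketch with illustrative rates in the maximal-current phase (the paper's matrix-product results apply in the thermodynamic limit):

```python
import numpy as np
from itertools import product

alpha, beta, L = 0.7, 0.7, 8          # entry rate, exit rate, sites (illustrative)
configs = list(product((0, 1), repeat=L))
index = {c: k for k, c in enumerate(configs)}
n = len(configs)
W = np.zeros((n, n))                  # rate matrix: W[j, i] = rate of i -> j

for i, c in enumerate(configs):
    c = list(c)
    if c[0] == 0:                     # particle enters at the left boundary
        d = c.copy(); d[0] = 1
        W[index[tuple(d)], i] += alpha
    if c[-1] == 1:                    # particle exits at the right boundary
        d = c.copy(); d[-1] = 0
        W[index[tuple(d)], i] += beta
    for s in range(L - 1):            # bulk hops to the right with unit rate
        if c[s] == 1 and c[s + 1] == 0:
            d = c.copy(); d[s], d[s + 1] = 0, 1
            W[index[tuple(d)], i] += 1.0
W -= np.diag(W.sum(axis=0))           # probability conservation: columns sum to zero

# stationary distribution = null vector of the generator
w, v = np.linalg.eig(W)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

H2 = -np.log(np.sum(p ** 2))          # Renyi-2 entropy: minus log of summed squares
print(f"alpha = beta = {alpha}: H2 = {H2:.4f}, H2/L = {H2/L:.4f}")
```

Repeating this for increasing L gives the leading extensive behavior plus the finite-size correction, which in the maximal-current phase the paper shows to be logarithmic in the system size.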
Statistical physics of medical diagnostics: Study of a probabilistic model
NASA Astrophysics Data System (ADS)
Mashaghi, Alireza; Ramezanpour, Abolfazl
2018-03-01
We study a diagnostic strategy which is based on the anticipation of the diagnostic process by simulation of the dynamical process starting from the initial findings. We show that such a strategy could result in more accurate diagnoses compared to a strategy that is solely based on the direct implications of the initial observations. We demonstrate this by employing the mean-field approximation of statistical physics to compute the posterior disease probabilities for a given subset of observed signs (symptoms) in a probabilistic model of signs and diseases. A Monte Carlo optimization algorithm is then used to maximize an objective function of the sequence of observations, which favors the more decisive observations resulting in more polarized disease probabilities. We see how the observed signs change the nature of the macroscopic (Gibbs) states of the sign and disease probability distributions. The structure of these macroscopic states in the configuration space of the variables affects the quality of any approximate inference algorithm (so the diagnostic performance) which tries to estimate the sign-disease marginal probabilities. In particular, we find that the simulation (or extrapolation) of the diagnostic process is helpful when the disease landscape is not trivial and the system undergoes a phase transition to an ordered phase.
Crime and punishment: the economic burden of impunity
NASA Astrophysics Data System (ADS)
Gordon, M. B.; Iglesias, J. R.; Semeshenko, V.; Nadal, J. P.
2009-03-01
Crime is an economically relevant activity. It may represent a mechanism of wealth distribution but also a social and economic burden because of the interference with regular legal activities and the cost of the law enforcement system. Sometimes it may be less costly for the society to allow for some level of criminality. However, a drawback of such a policy is that it may lead to a high increase of criminal activity, which may become hard to reduce later on. Here we investigate the level of law enforcement required to keep crime within acceptable limits. A sharp phase transition is observed as a function of the probability of punishment. We also analyze other consequences of criminality, such as the growth of the economy, the inequality in the wealth distribution (the Gini coefficient), and other relevant quantities under different scenarios of criminal activity and probabilities of apprehension.
Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei
2016-11-10
Multi-hop parallel relaying is considered for a free-space optical (FSO) communication system employing binary phase-shift keying (BPSK) modulation under the combined effects of gamma-gamma (GG) turbulence and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of the resulting channel random variable is derived, and the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the validity of the results for the average bit error rate (ABER) and outage probability. The numerical results show that the multi-hop parallel network requires less average transmitted optical power to achieve a given ABER and outage probability, and that using more hops and cooperative paths further improves communication quality.
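A Monte Carlo sketch of best-path selection over gamma-gamma fading with BPSK. Misalignment (pointing-error) fading and the multi-hop structure of each path are omitted for brevity, and all parameters are illustrative assumptions:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(6)
a, b = 4.0, 2.0                        # gamma-gamma shape parameters (turbulence)
n_paths, n_trials = 3, 200_000

def gamma_gamma(size):
    # unit-mean gamma-gamma irradiance: product of two independent gamma variates
    return rng.gamma(a, 1 / a, size) * rng.gamma(b, 1 / b, size)

for snr_db in range(0, 25, 5):
    snr = 10 ** (snr_db / 10)
    I = gamma_gamma((n_trials, n_paths))
    I_best = I.max(axis=1)             # best-path selection among parallel paths
    # conditional BPSK BER ~ 0.5*erfc(I*sqrt(snr/2)), averaged over the fading
    aber_best = np.mean(0.5 * erfc(I_best * np.sqrt(snr / 2)))
    aber_single = np.mean(0.5 * erfc(I[:, 0] * np.sqrt(snr / 2)))
    print(f"SNR {snr_db:2d} dB: single-path ABER = {aber_single:.3e}, "
          f"best-path ABER = {aber_best:.3e}")
```

Selecting the strongest of several independent fading realizations steepens the ABER curve, which is the diversity gain the paper quantifies analytically through the CDF of the selected path.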
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
Statistical thermodynamics of clustered populations.
Matsoukas, Themis
2014-08-01
We present a thermodynamic theory for a generic population of M individuals distributed into N groups (clusters). We construct the ensemble of all distributions with fixed M and N, introduce a selection functional that embodies the physics that governs the population, and obtain the distribution that emerges in the scaling limit as the most probable among all distributions consistent with the given physics. We develop the thermodynamics of the ensemble and establish a rigorous mapping to regular thermodynamics. We treat the emergence of a so-called giant component as a formal phase transition and show that the criteria for its emergence are entirely analogous to the equilibrium conditions in molecular systems. We demonstrate the theory by an analytic model and confirm the predictions by Monte Carlo simulation.
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, positions of nuclei are correlated. In this article an analytical approach is presented for computing pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest neighbor distribution function. The modeling is developed for nucleation in 2D space with power growth law and it is applied to describe electrochemical nucleation where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model which gives insights into the transition from Poissonian to correlated nearest neighbor probability density.
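A sketch of progressive nucleation with growth-induced exclusion: candidate nuclei arrive as a space-time Poisson process and are rejected inside already-transformed (grown) disks, after which nearest-neighbor statistics can be compared with the Poisson prediction. The rates, constant growth speed, and non-periodic box are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(7)
Lbox, v, rate, T_end, dt = 10.0, 0.05, 2.0, 6.0, 0.05
nuclei, times = [], []

t = 0.0
while t < T_end:
    # candidate nucleation events in this step (Poisson in space and time)
    for _ in range(rng.poisson(rate * Lbox**2 * dt)):
        pt = rng.uniform(0, Lbox, 2)
        # reject candidates inside any already-transformed (grown) disk
        ok = all(np.hypot(*(pt - q)) > v * (t - tq)
                 for q, tq in zip(nuclei, times))
        if ok:
            nuclei.append(pt)
            times.append(t)
    t += dt

pts = np.array(nuclei)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)                     # nearest-neighbor distances

rho = len(pts) / Lbox**2
poisson_median = np.sqrt(np.log(2) / (np.pi * rho))   # median nn distance, Poisson
print(f"{len(pts)} nuclei; median nn = {np.median(nn):.3f} "
      f"vs Poisson {poisson_median:.3f}")
# correlated (progressive) nucleation pushes nearest neighbors apart
```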
DOT National Transportation Integrated Search
2006-01-01
Problem: Work zones on heavily traveled divided highways present problems to motorists in the form of traffic delays and increased accident risks due to sometimes reduced motorist guidance, dense traffic, and other driving difficulties. To minimize t...
Frank, Stefan; Roberts, Daniel E; Rikvold, Per Arne
2005-02-08
The influence of nearest-neighbor diffusion on the decay of a metastable low-coverage phase (monolayer adsorption) in a square lattice-gas model of electrochemical metal deposition is investigated by kinetic Monte Carlo simulations. The phase-transformation dynamics are compared to the well-established Kolmogorov-Johnson-Mehl-Avrami theory. The phase transformation is accelerated by diffusion, but remains in accord with the theory for continuous nucleation up to moderate diffusion rates. At very high diffusion rates the phase-transformation kinetics show a crossover to instantaneous nucleation. Then, the probability of medium-sized clusters is reduced in favor of large clusters. Upon reversal of the supersaturation, the adsorbate desorbs, but large clusters still tend to grow during the initial stages of desorption. Calculation of the free energy of subcritical clusters by enumeration of lattice animals yields a quasiequilibrium distribution which is in reasonable agreement with the simulation results. This is an improvement relative to classical droplet theory, which fails to describe the distributions, since the macroscopic surface tension is a bad approximation for small clusters.
Quantum rotor model for a Bose-Einstein condensate of dipolar molecules.
Armaitis, J; Duine, R A; Stoof, H T C
2013-11-22
We show that a Bose-Einstein condensate of heteronuclear molecules in the regime of small and static electric fields is described by a quantum rotor model for the macroscopic electric dipole moment of the molecular gas cloud. We solve this model exactly and find the symmetric, i.e., rotationally invariant, and dipolar phases expected from the single-molecule problem, but also an axial and planar nematic phase due to many-body effects. Investigation of the wave function of the macroscopic dipole moment also reveals squeezing of the probability distribution for the angular momentum of the molecules.
Signal Processing Design of Low Probability of Intercept Waveforms via Intersymbol Dither
2008-03-01
[Partially recovered extract:] Section 3.3.1, Basic Receiver: considers a receiver whose matched filter output is sampled asynchronously (Figure 3.4). Figure 4.3: 4-ary DPSK constellation, symbols s1-s4 in the in-phase/quadrature plane. Table 4.1: 4-ary DPSK Gray code mapping of two-bit words to phase shifts Δθ. The noise model takes the real and imaginary parts of each noise sample as independent Gaussian variates, generated with the Marsaglia ziggurat algorithm in MATLAB.
Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback
NASA Astrophysics Data System (ADS)
Zverev, V. V.; Rubinstein, B. Ya.
1991-04-01
We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L. G.
2018-03-19
The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.
Finite-size scaling for discontinuous nonequilibrium phase transitions
NASA Astrophysics Data System (ADS)
de Oliveira, Marcelo M.; da Luz, M. G. E.; Fiore, Carlos E.
2018-06-01
A finite-size scaling theory, originally developed only for transitions to absorbing states [Phys. Rev. E 92, 062126 (2015), 10.1103/PhysRevE.92.062126], is extended to distinct sorts of discontinuous nonequilibrium phase transitions. Expressions for quantities such as response functions, reduced cumulants, and equal area probability distributions are derived from phenomenological arguments. Irrespective of system details, all these quantities scale with the volume, establishing the dependence on size. The approach generality is illustrated through the analysis of different models. The present results are a relevant step in trying to unify the scaling behavior description of nonequilibrium transition processes.
Heterogeneity-induced large deviations in activity and (in some cases) entropy production
NASA Astrophysics Data System (ADS)
Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.
2014-10-01
We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
Annealed scaling for a charged polymer in dimensions two and higher
NASA Astrophysics Data System (ADS)
Berger, Q.; den Hollander, F.; Poisat, J.
2018-02-01
This paper considers an undirected polymer chain on Z^d, d ≥ 2, with i.i.d. random charges attached to its constituent monomers. Each self-intersection of the polymer chain contributes an energy to the interaction Hamiltonian that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The object of interest is the annealed free energy per monomer in the limit as the length n of the polymer chain tends to infinity. We show that there is a critical curve in the parameter plane spanned by the charge bias and the inverse temperature separating an extended phase from a collapsed phase. We derive the scaling of the critical curve for small and for large charge bias and the scaling of the annealed free energy for small inverse temperature. We argue that in the collapsed phase the polymer chain is subdiffusive, namely, on scale ...
Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles
NASA Astrophysics Data System (ADS)
Anastopoulos, C.; Hu, B. L.
2018-02-01
We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position x_max of the rightmost charge in the limit of large N. We show that the typical fluctuations of x_max around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of x_max for Dyson's log gas. We also compute the large deviation functions of x_max explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
Hydrodynamics of the Polyakov line in SU(N_c) Yang-Mills
Liu, Yizhuang; Warchoł, Piotr; Zahed, Ismail
2015-12-08
We discuss a hydrodynamical description of the eigenvalues of the Polyakov line at large but finite N_c for Yang-Mills theory in even and odd space-time dimensions. The hydro-static solutions for the eigenvalue densities are shown to interpolate between a uniform distribution in the confined phase and a localized distribution in the de-confined phase. The resulting critical temperatures are in overall agreement with those measured on the lattice over a broad range of N_c, and are consistent with the string model results at N_c = ∞. The stochastic relaxation of the eigenvalues of the Polyakov line out of equilibrium is captured by a hydrodynamical instanton. An estimate of the probability of formation of a Z(N_c) bubble using a piece-wise sound wave is suggested.
Uranium concentration and distribution in six peridotite inclusions of probable mantle origin
NASA Technical Reports Server (NTRS)
Haines, E. L.; Zartman, R. E.
1973-01-01
Fission-track activation was used to investigate uranium concentration and distribution in peridotite inclusions in alkali basalt from six localities. Whole-rock uranium concentrations range from 24 to 82 ng/g. Most of the uranium is uniformly distributed in the major silicate phases - olivine, orthopyroxene, and clinopyroxene. Chromian spinels may be classified into two groups on the basis of their uranium content - those which have less than 10 ng/g and those which have 100 to 150 ng/g U. In one sample accessory hydrous phases, phlogopite and hornblende, contain 130 and 300 ng/g U, respectively. The contact between the inclusion and the host basalt is usually quite sharp. Glassy or microcrystalline veinlets found in some samples contain more than 1 microgram/g. Very little uranium is associated with microcrystals of apatite. These results agree with some earlier investigators, who have concluded that suboceanic peridotites contain too little uranium to account for normal oceanic heat flow by conduction alone.
Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M
2008-06-01
Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
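The SAS macros themselves are not reproduced in the abstract; as a rough companion in Python, the sketch below runs a stripped-down 3+3 escalation trial as a discrete-event process. The dose-level DLT probabilities, accrual rate and observation window are hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_3plus3(p_dlt, accrual_rate=0.5, obs_window=28.0):
    """Discrete-event sketch of a 3+3 trial: returns (duration, MTD index).

    p_dlt: hypothetical DLT probability at each dose level.
    accrual_rate: patients/day; inter-arrival times are exponential.
    obs_window: days of follow-up before each escalation decision.
    """
    t, dose = 0.0, 0
    while 0 <= dose < len(p_dlt):
        t = t + rng.exponential(1.0 / accrual_rate, 3).sum() + obs_window
        dlts = rng.binomial(3, p_dlt[dose])
        if dlts == 0:
            dose += 1                          # escalate
        elif dlts == 1:                        # expand the cohort to 6
            t = t + rng.exponential(1.0 / accrual_rate, 3).sum() + obs_window
            if rng.binomial(3, p_dlt[dose]) == 0:
                dose += 1                      # 1/6 DLTs: still escalate
            else:
                return t, dose - 1             # >=2/6 DLTs: MTD is previous level
        else:
            return t, dose - 1                 # >=2/3 DLTs: MTD is previous level
    return t, min(dose, len(p_dlt) - 1)

runs = [simulate_3plus3([0.05, 0.10, 0.25, 0.45]) for _ in range(2000)]
durations = [d for d, _ in runs]
print(f"median simulated study duration: {np.median(durations):.0f} days")
```

Comparing such duration distributions across decision rules (for example, against a rolling-6 variant) is the kind of efficiency metric the post-processing step would tabulate.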
Unraveling hadron structure with generalized parton distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrei Belitsky; Anatoly Radyushkin
2004-10-01
The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of the main achievements in the development of this formalism. We discuss the physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING
Saegusa, Takumi; Wellner, Jon A.
2013-01-01
We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools is developed, including a Glivenko-Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559
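As a toy illustration of the weighted-likelihood idea (not the authors' estimator or their asymptotic theory), this sketch fits a logistic model by an inverse-probability-weighted likelihood under stratified phase-two sampling. Bernoulli sampling stands in for sampling without replacement, the simplification the paper itself contrasts with the exact design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Phase I: observe (y, stratum) for everyone; phase II: observe x in a subsample.
n = 5000
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * x))))
stratum = y                                  # case-control style strata
pi = np.where(stratum == 1, 0.9, 0.2)        # known phase-II sampling probabilities
selected = rng.random(n) < pi                # Bernoulli stand-in for the
                                             # without-replacement design

def neg_weighted_loglik(beta):
    eta = beta[0] + beta[1] * x[selected]
    ll = y[selected] * eta - np.logaddexp(0.0, eta)   # logistic log-likelihood
    return -np.sum(ll / pi[selected])                 # Horvitz-Thompson weights

beta_hat = minimize(neg_weighted_loglik, x0=np.zeros(2)).x
print("WLE estimate:", beta_hat)             # close to the true (-1.0, 1.5)
```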
Twenty-five years of change in southern African passerine diversity: nonclimatic factors of change.
Péron, Guillaume; Altwegg, Res
2015-09-01
We analysed more than 25 years of change in passerine bird distribution in South Africa, Swaziland and Lesotho, to show that species distributions can be influenced by processes that are at least in part independent of the local strength and direction of climate change: land use and ecological succession. We used occupancy models that separate species' detection from species' occupancy probability, fitted to citizen science data from both phases of the Southern African Bird Atlas Project (1987-1996 and 2007-2013). Temporal trends in species' occupancy probability were interpreted in terms of local extinction/colonization, and temporal trends in detection probability were interpreted in terms of change in abundance. We found for the first time at this scale that, as predicted in the context of bush encroachment, closed-savannah specialists increased where open-savannah specialists decreased. In addition, the trend in the abundance of species a priori thought to be favoured by agricultural conversion was negatively correlated with human population density, which is in line with hypotheses explaining the decline in farmland birds in the Northern Hemisphere. In addition to climate, vegetation cover and the intensity and time since agricultural conversion constitute important predictors of biodiversity changes in the region. Their inclusion will improve the reliability of predictive models of species distribution. © 2015 John Wiley & Sons Ltd.
Brommer, Jon E; Pietiäinen, Hannu; Kokko, Hanna
2002-01-01
Plastic life-history traits can be viewed as adaptive responses to environmental conditions, described by a reaction norm. In birds, the decline in clutch size with advancing laying date has been viewed as a reaction norm in response to the parent's own (somatic or local environmental) condition and the seasonal decline in its offspring's reproductive value. Theory predicts that differences in the seasonal recruitment are mirrored in the seasonal decrease in clutch size. We tested this prediction in the Ural owl. The owl's main prey, voles, show a cycle of low, increase and peak phases. Recruitment probability had a humped distribution in both increase and peak phases. Average recruitment probability was two to three times higher in the increase phase and declined faster in the latter part of the season when compared with the peak phase. Clutch size decreased twice as steeply in the peak phase (0.1 eggs day⁻¹) as in the increase phase (0.05 eggs day⁻¹). This result appears to refute theoretical predictions of seasonal clutch size declines. However, a re-examination of current theory shows that the predictions of modelling are less robust to details of seasonal condition accumulation in birds than originally thought. The observed pattern can be predicted, assuming specifically shaped seasonal increases in condition across individuals. PMID:11916482
Yeh, Jun-Jun; Neoh, Choo-Aun; Chen, Cheng-Ren; Chou, Christine Yi-Ting; Wu, Ming-Ting
2014-01-01
This study evaluated the use of high-resolution computed tomography (HRCT) to predict the presence of culture-positive pulmonary tuberculosis (PTB) in adult patients with pulmonary lesions in the emergency department (ED). The study included a derivation phase and a validation phase with a total of 8,245 patients with pulmonary disease. There were 132 patients with culture-positive PTB in the derivation phase and 147 patients with culture-positive PTB in the validation phase. Imaging evaluation of pulmonary lesions included morphology and segmental distribution. The post-test probability ratios between both phases in three prevalence areas were analyzed. In the derivation phase, a multivariate analysis model identified cavitation, consolidation, and clusters/nodules in the right or left upper lobe (except the anterior segment) and consolidation of the superior segment of the right or left lower lobe as independent positive factors for culture-positive PTB, while consolidation of the right or left lower lobe (except the superior segment) was an independent negative factor. An ideal cutoff point based on the receiver operating characteristic (ROC) curve analysis was obtained at a score of 1. The sensitivity, specificity, positive predictive value, and negative predictive value from the derivation phase were 98.5% (130/132), 99.7% (3997/4008), 92.2% (130/141), and 99.9% (3997/3999). Based on the predicted positive likelihood ratio value of 328.33 in the derivation phase, the post-test probability was observed to be 91.5% in the derivation phase, 92.5% in the validation phase, 94.5% in a high TB prevalence area, 91.0% in a moderate prevalence area, and 76.8% in a moderate-to-low prevalence area. Our model using HRCT, which is feasible to perform in the ED, can promptly diagnose culture-positive PTB in moderate and moderate-to-low prevalence areas.
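The quoted post-test probabilities follow from Bayes' rule applied to the positive likelihood ratio. A minimal sketch, assuming pre-test prevalences of 5%, 3% and 1% for the high, moderate and moderate-to-low prevalence areas (assumed values, chosen because they reproduce the abstract's figures):

```python
def post_test_probability(prevalence, positive_lr):
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_odds = prevalence / (1.0 - prevalence)
    post_odds = pre_odds * positive_lr
    return post_odds / (1.0 + post_odds)

# LR+ = 328.33 is taken from the abstract; the prevalences are assumptions.
for label, prev in [("high", 0.05), ("moderate", 0.03), ("moderate-to-low", 0.01)]:
    print(f"{label:16s} prevalence {prev:.0%} -> "
          f"post-test {post_test_probability(prev, 328.33):.1%}")
```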
Quantum mechanics on phase space and the Coulomb potential
NASA Astrophysics Data System (ADS)
Campos, P.; Martins, M. G. R.; Vianna, J. D. M.
2017-04-01
Symplectic quantum mechanics (SQM) makes it possible to derive the Wigner function without the use of the Liouville-von Neumann equation. In this formulation of the quantum theory the Galilei Lie algebra is constructed using the Weyl (or star) product with Q̂ = q⋆ = q + (iħ/2)∂_p, P̂ = p⋆ = p − (iħ/2)∂_q, and the Schrödinger equation is rewritten in phase space; in consequence physical applications involving the Coulomb potential present some specific difficulties. Within this context, in order to treat the Schrödinger equation in phase space, a procedure based on the Levi-Civita (or Bohlin) transformation is presented and applied to the two-dimensional (2D) hydrogen atom. Amplitudes of probability in phase space and the corresponding Wigner quasi-distribution functions are derived and discussed.
Extreme wave formation in unidirectional sea due to stochastic wave phase dynamics
NASA Astrophysics Data System (ADS)
Wang, Rui; Balachandran, Balakumar
2018-07-01
The authors consider a stochastic model based on the interaction and phase coupling amongst wave components that are modified envelope soliton solutions to the nonlinear Schrödinger equation. A probabilistic study is carried out and the resulting findings are compared with ocean wave field observations and laboratory experimental results. The wave height probability distribution obtained from the model is found to match well with prior data in the large wave height region. From the eigenvalue spectrum obtained through the Inverse Scattering Transform, it is revealed that the deep-water wave groups move at a speed different from the linear group speed, which justifies the inclusion of phase correction to the envelope solitary wave components. It is determined that phase synchronization amongst elementary solitary wave components can be critical for the formation of extreme waves in unidirectional sea states.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. In total, 30 years of wave height, wind speed, and current velocity data in the Bohai Sea are hindcast and sampled for a case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide design parameters close to the actual sea state for ocean platform design.
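As a hedged illustration of the construction (not the fitted Bohai Sea model), the sketch below evaluates a trivariate Gumbel-Hougaard copula on Pearson Type III marginals; every distribution parameter and the dependence parameter theta are hypothetical.

```python
import numpy as np
from scipy.stats import pearson3

def gumbel_hougaard(u, theta):
    """Trivariate Gumbel-Hougaard copula C(u1, u2, u3); requires theta >= 1."""
    s = np.sum((-np.log(u)) ** theta)
    return np.exp(-s ** (1.0 / theta))

# Hypothetical Pearson Type III marginals; real values would be fitted to the
# 30-year hindcast data.
wave    = pearson3(skew=0.8, loc=3.0,  scale=1.0)   # significant wave height (m)
wind    = pearson3(skew=0.5, loc=15.0, scale=4.0)   # wind speed (m/s)
current = pearson3(skew=0.3, loc=1.0,  scale=0.3)   # current velocity (m/s)

h, w, c = 6.0, 25.0, 1.8                            # a candidate design combination
u = np.array([wave.cdf(h), wind.cdf(w), current.cdf(c)])
p_joint = gumbel_hougaard(u, theta=2.5)
# With one design event per year, the joint return period of exceeding this
# combination is roughly 1 / (1 - C(u)).
print(f"P(H<=h, W<=w, U<=c) = {p_joint:.4f}; "
      f"return period ~ {1.0 / (1.0 - p_joint):.1f} yr")
```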
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables are derived. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions are also obtained.
Self-narrowing of size distributions of nanostructures by nucleation antibunching
NASA Astrophysics Data System (ADS)
Glas, Frank; Dubrovskii, Vladimir G.
2017-08-01
We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.
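A minimal caricature of the antibunching mechanism, not the authors' discrete or continuum schemes: each wire is a renewal process whose nucleation hazard resets after every event, mimicking depletion and gradual refill of the droplet. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

T, dt = 200.0, 0.01
r0, a = 0.02, 0.5          # base rate and refill exponent; a = 0 is pure Poisson
n_wires = 3000

tau = np.zeros(n_wires)    # time since the last nucleation event in each droplet
n = np.zeros(n_wires, dtype=int)
for _ in range(int(T / dt)):
    tau += dt
    fire = rng.random(n_wires) < r0 * np.exp(a * tau) * dt
    n += fire
    tau[fire] = 0.0        # nucleation depletes the droplet: antibunching

print(f"mean length {n.mean():.1f} monolayers, variance {n.var():.1f} "
      f"(a Poisson process would give variance ~ mean)")
```

The sub-Poissonian variance (variance well below the mean) is the self-narrowing effect described above; with a = 0 the ratio returns to one.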
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To obtain better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H_0). Measures of the design properties (for example, probability of early trial termination, expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
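A small sketch of the predictive-probability calculation that underlies such boundaries, using the beta-binomial marginal law of the stage-II responses; the prior, sample sizes and success cutoff are illustrative rather than the paper's design values.

```python
from scipy.stats import betabinom

# Beta(a, b) prior on the response rate p (all numbers are illustrative).
a, b = 1.0, 1.0
n1, x1 = 15, 6             # stage-I sample size and observed responses
n2 = 18                    # stage-II sample size
r_total = 13               # the trial succeeds if total responses exceed r_total

# Posterior after stage I is Beta(a + x1, b + n1 - x1), so the predictive law
# of the stage-II responses X2 is beta-binomial.
pred = betabinom(n2, a + x1, b + n1 - x1)
needed = r_total - x1 + 1                    # responses still required
pp = pred.sf(needed - 1)                     # P(X2 >= needed)
print(f"predictive probability of trial success: {pp:.3f}")
```

Scanning this quantity over all possible stage-I outcomes is what fixes the early-acceptance and early-rejection boundaries.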
Baity-Jesi, Marco; Calore, Enrico; Cruz, Andres; Fernandez, Luis Antonio; Gil-Narvión, José Miguel; Gordillo-Guerrero, Antonio; Iñiguez, David; Maiorano, Andrea; Marinari, Enzo; Martin-Mayor, Victor; Monforte-Garcia, Jorge; Muñoz Sudupe, Antonio; Navarro, Denis; Parisi, Giorgio; Perez-Gaviro, Sergio; Ricci-Tersenghi, Federico; Ruiz-Lorenzo, Juan Jesus; Schifano, Sebastiano Fabio; Tarancón, Alfonso; Tripiccione, Raffaele; Yllanes, David
2017-01-01
We have performed a very accurate computation of the nonequilibrium fluctuation–dissipation ratio for the 3D Edwards–Anderson Ising spin glass, by means of large-scale simulations on the special-purpose computers Janus and Janus II. This ratio (computed for finite times on very large, effectively infinite, systems) is compared with the equilibrium probability distribution of the spin overlap for finite sizes. Our main result is a quantitative statics-dynamics dictionary, which could allow the experimental exploration of important features of the spin-glass phase without requiring uncontrollable extrapolations to infinite times or system sizes. PMID:28174274
NASA Astrophysics Data System (ADS)
Kagoshima, Yasushi; Miyagawa, Takamasa; Kagawa, Saki; Takeda, Shingo; Takano, Hidekazu
2017-08-01
The intensity distribution in phase space of an X-ray synchrotron radiation beamline was measured using a pinhole camera method, in order to verify astigmatism compensation by a Fresnel zone plate focusing optical system. The beamline is equipped with a silicon double crystal monochromator. The beam size and divergence at an arbitrary distance were estimated. It was found that the virtual source point was largely different between the vertical and horizontal directions, which is probably caused by thermal distortion of the monochromator crystal. The result is consistent with our astigmatism compensation by inclining a Fresnel zone plate.
Repelling, binding, and oscillating of two-particle discrete-time quantum walks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qinghao; Li, Zhi-Jian, E-mail: zjli@sxu.edu.cn
In this paper, we investigate the effects of particle–particle interaction and static force on the propagation of probability distribution in two-particle discrete-time quantum walk, where the interaction and static force are expressed as a collision phase and a linear position-dependent phase, respectively. It is found that the interaction can lead to boson repelling and fermion binding. The static force also induces Bloch oscillation and results in a continuous transition from boson bunching to fermion anti-bunching. The interplays of particle–particle interaction, quantum interference, and Bloch oscillation provide a versatile framework to study and simulate many-particle physics via quantum walks.
NASA Astrophysics Data System (ADS)
Margolin, L. G.
2018-04-01
The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue 'Hilbert's sixth problem'.
Efficient entanglement distillation without quantum memory.
Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman
2016-05-31
Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
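A quick numerical companion under stated assumptions: Euler-Maruyama integration of the dry-friction Langevin equation dv = -mu sign(v) dt + sqrt(2D) dW, accumulating the occupation time of v > 0; parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

mu, D = 1.0, 1.0                 # friction strength and diffusion constant
dt, T, n_traj = 1e-3, 20.0, 2000
steps = int(T / dt)

v = np.zeros(n_traj)
t_pos = np.zeros(n_traj)         # occupation time of the half-line v > 0
for _ in range(steps):
    v += -mu * np.sign(v) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_traj)
    t_pos += dt * (v > 0)

occ = t_pos / T
print(f"occupation fraction of v>0: mean {occ.mean():.3f} "
      f"(symmetry gives 1/2), std {occ.std():.3f}")
# Histogramming occ across trajectories approximates the occupation-time
# distribution whose moments the backward Fokker-Planck technique yields in
# closed form.
```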
Independence and totalness of subspaces in phase space methods
NASA Astrophysics Data System (ADS)
Vourdas, A.
2018-04-01
The concepts of independence and totalness of subspaces are introduced in the context of quasi-probability distributions in phase space, for quantum systems with finite-dimensional Hilbert space. It is shown that due to the non-distributivity of the lattice of subspaces, there are various levels of independence, from pairwise independence up to (full) independence. Pairwise totalness, totalness and other intermediate concepts are also introduced, which roughly express that the subspaces overlap strongly among themselves, and they cover the full Hilbert space. A duality between independence and totalness, that involves orthocomplementation (logical NOT operation), is discussed. Another approach to independence is also studied, using Rota's formalism on independent partitions of the Hilbert space. This is used to define informational independence, which is proved to be equivalent to independence. As an application, the pentagram (used in discussions on contextuality) is analysed using these concepts.
2007-05-29
International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), 15-20 April 2007, Honolulu, Hawaii. ... from the sensor measured in feet. The detection performance of the footstep in the presence of interfering speech was characterized previously. ... In this investigation, we developed a simple piecewise linear approximation to the probability of detection curve with no interfering speech. This approximation was ...
Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain
NASA Astrophysics Data System (ADS)
Žnidarič, Marko
2014-01-01
We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating large-deviation rate function in the thermodynamic limit, a generalization of free energy to a nonequilibrium setting, we obtain a complete distribution of current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors in either the thermodynamic limit, at which the current probability distribution becomes discontinuous, or at maximal driving, when the range of possible current values changes discontinuously. In the thermodynamic limit the current has a finite upper and lower bound. We also explicitly confirm nonequilibrium fluctuation relation and show that the current distribution is the same under mapping of the coupling strength Γ→1/Γ.
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b̃ value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b̃ value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with respect to any systematic inferred underlying process.
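The stacking procedure can be mimicked on synthetic data. The sketch below, a simplified stand-in for the authors' method, draws randomly placed and randomly sized windows from a two-regime Gutenberg-Richter catalog, applies the Aki maximum-likelihood b-value estimator to each window, and stacks the estimates; the resulting bimodality is the multimodal behavior discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic catalog: two regimes with different b-values stand in for the
# El Hierro data (magnitudes above a completeness magnitude Mc).
Mc = 1.0
mags = np.concatenate([
    Mc + rng.exponential(1.0 / (1.0 * np.log(10)), 4000),   # b = 1.0 phase
    Mc + rng.exponential(1.0 / (2.0 * np.log(10)), 4000),   # b = 2.0 phase
])

def aki_b(m):
    """Aki maximum-likelihood b-value estimate (binning correction omitted)."""
    return np.log10(np.e) / (m.mean() - Mc)

# Randomly positioned, randomly sized windows; stack the resulting estimates.
b_hats = []
for _ in range(20000):
    size = rng.integers(200, 1000)
    start = rng.integers(0, len(mags) - size)
    b_hats.append(aki_b(mags[start:start + size]))

b_hats = np.array(b_hats)
print(f"fraction of estimates near b=1: {np.mean(np.abs(b_hats - 1.0) < 0.15):.2f}, "
      f"near b=2: {np.mean(np.abs(b_hats - 2.0) < 0.30):.2f}")
```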
Loss Reduction on Adoption of High Voltage LT Less Distribution
NASA Astrophysics Data System (ADS)
Tiwari, Deepika; Adhikari, Nikhileshwar Prasad; Gupta, Amit; Bajpai, Santosh Kumar
2016-06-01
In India there is a need to improve the quality of the electricity distribution process, as demand has increased from year to year. In distribution networks, the limiting factor on load carrying capacity is generally voltage reduction. A high voltage distribution system (HVDS) is one of the steps to reduce line losses in an electrical distribution network. It helps to reduce the length of low tension (LT) lines and makes power available close to the users. The high voltage power distribution system also reduces the probability of power theft by hooking. HVDS implies an increase in the installation of small-capacity single-phase transformers in the network, which again saves considerable energy. This paper compares an existing conventional low tension distribution network with HVDS and gives a clear picture of the reduction in distribution losses with adoption of the HVDS system. The loss reduction for an 11 kV feeder in Nuniya (India) with adoption of HVDS has been worked out and quantified, and the resulting benefits in generating capacity are discussed.
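The loss argument is simple arithmetic: for a fixed delivered power over the same conductor, line current scales as 1/V, so the I²R copper loss scales as 1/V². A sketch with illustrative numbers (the load, feeder resistance and power factor are assumptions, not data from the Nuniya feeder):

```python
import math

P = 100e3                      # 100 kW load (illustrative)
R = 0.5                        # ohms per phase over the feeder length (illustrative)
pf = 0.9                       # power factor

for V_ll in (415.0, 11000.0):  # LT three-phase vs 11 kV HVDS
    I = P / (math.sqrt(3) * V_ll * pf)        # line current per phase
    loss = 3 * I**2 * R                       # total copper loss
    print(f"{V_ll/1000:5.2f} kV: line current {I:7.1f} A, "
          f"copper loss {loss/1000:8.3f} kW")
# The loss ratio is (11000/415)^2 ~ 700x, which is why shortening LT lines and
# distributing at 11 kV cuts technical losses so sharply.
```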
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
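To make the point concrete, the sketch below contrasts a CDF-based test with a crude density-based variant (not the authors' exact statistic): data contaminated in a low-density region can pass Kolmogorov-Smirnov comfortably, while the smallness of the density at the least probable draw exposes the contamination.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(5)

# Data from a contaminated distribution; H0 specifies a standard normal.
data = np.concatenate([rng.normal(0, 1, 980), rng.normal(6, 0.1, 20)])

# Classical check on the cumulative distribution function.
print("KS p-value:", kstest(data, norm.cdf).pvalue)

# Density-based check in the spirit of the paper: how unlikely is it, under
# H0, to see a draw whose density is as small as the least probable one?
f_min = norm.pdf(data).min()
f_null = norm.pdf(rng.normal(0, 1, 200000))       # Monte Carlo law of f(X)
p_single = np.mean(f_null <= f_min)               # P(f(X) <= f_min) under H0
p_value = 1.0 - (1.0 - p_single) ** len(data)     # smallest of n draws
print("density-based p-value:", p_value)
```

Here the KS p-value typically stays unremarkable while the density-based check collapses toward zero, which is exactly the deficiency of CDF-based tests described above.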
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kauweloa, K; Gutierrez, A; Bergamo, A
Purpose: There is growing interest in biological effective dose (BED) and its application in treatment plan evaluation due to its stronger correlation with treatment outcome. An approximate biological effective dose (BED_A) equation was introduced to simplify BED calculations by treatment planning systems in multi-phase treatments. The purpose of this work is to reveal its mathematical properties relative to the true, multi-phase BED (BED_T) equation. Methods: The BED_T equation was derived and used to reveal the mathematical properties of BED_A. MATLAB (MathWorks, Natick, MA) was used to simulate and analyze common and extreme clinical multi-phase cases. In those cases, percent error (P_error) and Bland-Altman analysis were used to study the significance of the inaccuracies of BED_A for different combinations of total doses, numbers of fractions, doses per fraction and α/β values. All the calculations were performed on a voxel basis in order to study how dose distributions would affect the accuracy of BED_A. Results: When the voxel doses-per-fraction (DPF) delivered by both phases are equal, BED_A and BED_T are equal. In heterogeneous dose distributions, which vary significantly between the phases, there are fewer occurrences of equal DPFs and hence the imprecision of BED_A is greater. It was shown that as the α/β ratio increased, the accuracy of BED_A improved. Examining twenty-four cases, it was shown that the range of DPF ratios for a P_error of 3 varied from 0.32 to 7.50 Gy, whereas for a P_error of 1 the range varied from 0.50 to 2.96 Gy. Conclusion: The DPFs between the different phases should be equal in order to render BED_A accurate. OARs typically receive heterogeneous dose distributions, hence the probability of equal DPFs is low. Consequently, the BED_A equation should only be used for targets or OARs that receive uniform or very similar dose distributions in the different treatment phases.
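A voxel-level sketch of the comparison, assuming that BED_T sums the standard linear-quadratic BED over phases and that BED_A applies the single-phase formula to the summed dose with the fraction-averaged dose per fraction. The BED_A form is an assumption (consistent with the stated equality when the DPFs match); the paper's exact expression may differ.

```python
def bed_true(doses_per_fraction, n_fractions, ab):
    """Multi-phase BED: sum of per-phase n*d*(1 + d/(alpha/beta)) terms."""
    return sum(n * d * (1.0 + d / ab)
               for d, n in zip(doses_per_fraction, n_fractions))

def bed_approx(doses_per_fraction, n_fractions, ab):
    """Assumed BED_A: single-phase formula on the summed dose, using the
    fraction-averaged dose per fraction."""
    D = sum(d * n for d, n in zip(doses_per_fraction, n_fractions))
    n_tot = sum(n_fractions)
    return D * (1.0 + (D / n_tot) / ab)

ab = 3.0                       # alpha/beta for a late-responding OAR (Gy)
n = (25, 10)                   # fractions in phase 1 and phase 2
for d in [(2.0, 2.0), (2.0, 1.0), (1.8, 0.5)]:   # voxel dose/fraction per phase
    bt, ba = bed_true(d, n, ab), bed_approx(d, n, ab)
    print(f"DPFs {d}: BED_T {bt:6.2f} Gy, BED_A {ba:6.2f} Gy, "
          f"error {100 * (ba - bt) / bt:+.1f}%")
```

With equal DPFs the two agree exactly; as the per-phase DPFs diverge, BED_A drifts away from BED_T, in line with the results above.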
NASA Astrophysics Data System (ADS)
Puy, Martin; Vialard, J.; Lengaigne, M.; Guilyardi, E.
2016-04-01
Synoptic wind events in the equatorial Pacific strongly influence the El Niño/Southern Oscillation (ENSO) evolution. This paper characterizes the spatio-temporal distribution of Easterly (EWEs) and Westerly Wind Events (WWEs) and quantifies their relationship with intraseasonal and interannual large-scale climate variability. We unambiguously demonstrate that the Madden-Julian Oscillation (MJO) and Convectively-coupled Rossby Waves (CRW) modulate both WWEs and EWEs occurrence probability. 86 % of WWEs occur within convective MJO and/or CRW phases and 83 % of EWEs occur within the suppressed phase of MJO and/or CRW. 41 % of WWEs and 26 % of EWEs are in particular associated with the combined occurrence of a CRW/MJO, far more than what would be expected from a random distribution (3 %). Wind events embedded within MJO phases also have a stronger impact on the ocean, due to a tendency to have a larger amplitude, zonal extent and longer duration. These findings are robust irrespective of the wind events and MJO/CRW detection methods. While WWEs and EWEs behave rather symmetrically with respect to MJO/CRW activity, the impact of ENSO on wind events is asymmetrical. The WWEs occurrence probability indeed increases when the warm pool is displaced eastward during El Niño events, an increase that can partly be related to interannual modulation of the MJO/CRW activity in the western Pacific. On the other hand, the EWEs modulation by ENSO is less robust, and strongly depends on the wind event detection method. The consequences of these results for ENSO predictability are discussed.
NASA Astrophysics Data System (ADS)
Regnier, David; Lacroix, Denis; Scamps, Guillaume; Hashimoto, Yukio
2018-03-01
In a mean-field description of superfluidity, particle number and gauge angle are treated as quasiclassical conjugate variables. This level of description was recently used to describe nuclear reactions around the Coulomb barrier. Important effects of the relative gauge angle between two identical superfluid nuclei (symmetric collisions) on transfer probabilities and the fusion barrier have been uncovered. A theory making contact with experiments should at least average over different initial relative gauge angles. In the present work, we propose a new approach to obtain the multiple pair transfer probabilities between superfluid systems. This method, called the phase-space combinatorial (PSC) technique, relies both on phase-space averaging and combinatorial arguments to infer the full pair transfer probability distribution at the cost of multiple mean-field calculations only. After benchmarking this approach in a schematic model, we apply it to the collision 20O+20O at various energies below the Coulomb barrier. The predictions for one pair transfer are similar to results obtained with an approximated projection method, whereas significant differences are found for two-pair transfer. Finally, we investigated the applicability of the PSC method to the contact between nonidentical superfluid systems. A generalization of the method is proposed and applied to the schematic model showing that the pair transfer probabilities are reasonably reproduced. The applicability of the PSC method to asymmetric nuclear collisions is investigated for the 14O+20O collision and it turns out that unrealistically small single- and multiple pair transfer probabilities are obtained. This is explained by the fact that the relative gauge angle plays in this case a minor role in the particle transfer process compared to other mechanisms, such as equilibration of the charge/mass ratio. We conclude that the best ground for probing gauge-angle effects in nuclear reactions and/or for applying the proposed PSC approach on pair transfer is the collision of identical open-shell spherical nuclei.
Poincaré recurrence statistics as an indicator of chaos synchronization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boev, Yaroslav I., E-mail: boev.yaroslav@gmail.com; Vadivasova, Tatiana E., E-mail: vadivasovate@yandex.ru; Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru
The dynamics of the autonomous and non-autonomous Rössler system is studied using the Poincaré recurrence time statistics. It is shown that the probability distribution density of Poincaré recurrences represents a set of equidistant peaks with the distance that is equal to the oscillation period and the envelope obeys an exponential distribution. The dimension of the spatially uniform Rössler attractor is estimated using Poincaré recurrence times. The mean Poincaré recurrence time in the non-autonomous Rössler system is locked by the external frequency, and this enables us to detect the effect of phase-frequency synchronization.
Probability distributions of linear statistics in chaotic cavities and associated phase transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol
2010-03-01
We establish large deviation formulas for linear statistics on the N transmission eigenvalues {T_i} of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest A = Σ_{i=1}^N a(T_i), the probability distribution P_A(A, N) of A generically satisfies the large deviation formula lim_{N→∞} [−2 log P_A(Nx, N)/(βN²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].
Quantum temporal probabilities in tunneling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastopoulos, Charis, E-mail: anastop@physics.upatras.gr; Savvidou, Ntina, E-mail: ksavvidou@physics.upatras.gr
We study the temporal aspects of quantum tunneling as manifested in time-of-arrival experiments in which the detected particle tunnels through a potential barrier. In particular, we present a general method for constructing temporal probabilities in tunneling systems that (i) defines ‘classical’ time observables for quantum systems and (ii) applies to relativistic particles interacting through quantum fields. We show that the relevant probabilities are defined in terms of specific correlation functions of the quantum field associated with tunneling particles. We construct a probability distribution with respect to the time of particle detection that contains all information about the temporal aspects of the tunneling process. In specific cases, this probability distribution leads to the definition of a delay time that, for parity-symmetric potentials, reduces to the phase time of Bohm and Wigner. We apply our results to piecewise constant potentials, by deriving the appropriate junction conditions on the points of discontinuity. For the double square potential, in particular, we demonstrate the existence of (at least) two physically relevant time parameters, the delay time and a decay rate that describes the escape of particles trapped in the inter-barrier region. Finally, we propose a resolution to the paradox of apparent superluminal velocities for tunneling particles. We demonstrate that the idea of faster-than-light speeds in tunneling follows from an inadmissible use of classical reasoning in the description of quantum systems. Highlights: •Present a general methodology for deriving temporal probabilities in tunneling systems. •Treatment applies to relativistic particles interacting through quantum fields. •Derive a new expression for tunneling time. •Identify new time parameters relevant to tunneling. •Propose a resolution of the superluminality paradox in tunneling.
NASA Astrophysics Data System (ADS)
Yuan, Di; Tian, Jun-Long; Lin, Fang; Ma, Dong-Wei; Zhang, Jing; Cui, Hai-Tao; Xiao, Yi
2018-06-01
In this study we investigate the collective behavior of the generalized Kuramoto model with an external pinning force in which oscillators with positive and negative coupling strengths are conformists and contrarians, respectively. We focus on a situation in which the natural frequencies of the oscillators follow a uniform probability density. By numerically simulating the model, it is shown that the model supports multistable synchronized states such as a traveling wave state, π state and periodic synchronous state: an oscillating π state. The oscillating π state may be characterized by the phase distribution oscillating in a confined region and the phase difference between conformists and contrarians oscillating around π periodically. In addition, we present the parameter space of the oscillating π state and traveling wave state of the model.
Diffusion and Localization of Relative Strategy Scores in The Minority Game
NASA Astrophysics Data System (ADS)
Granath, Mats; Perez-Diaz, Alvaro
2016-10-01
We study the equilibrium distribution of relative strategy scores of agents in the asymmetric phase (α ≡ P/N ≳ 1) of the basic Minority Game using sign-payoff, with N agents holding two strategies over P histories. We formulate a statistical model that makes use of the gauge freedom with respect to the ordering of an agent's strategies to quantify the correlation between the attendance and the distribution of strategies. The relative score x ∈ Z of the two strategies of an agent is described in terms of a one-dimensional random walk with asymmetric jump probabilities, leading either to a static and asymmetric exponential distribution centered at x = 0 for fickle agents or to diffusion with a positive or negative drift for frozen agents. In terms of scaled coordinates x/√N and t/N the distributions are uniquely given by α and in quantitative agreement with direct simulations of the game. As the model avoids the reformulation in terms of a constrained minimization problem it can be used for arbitrary payoff functions with little calculational effort and provides a transparent and simple formulation of the dynamics of the basic Minority Game in the asymmetric phase.
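A minimal simulation of the score walk described above; the constant inward bias epsilon is a caricature standing in for the attendance correlation, which in the paper is determined by alpha rather than set by hand:

```python
import numpy as np

rng = np.random.default_rng(11)

eps, steps, n_agents = 0.05, 20000, 5000
x = np.zeros(n_agents, dtype=int)
for _ in range(steps):
    active = rng.random(n_agents) < 0.5          # lazy walk, avoids parity locking
    toward_zero = rng.random(n_agents) < 0.5 + eps
    # Unit step pointing at zero (random direction when already at zero).
    step = np.where(x > 0, -1,
                    np.where(x < 0, 1, rng.integers(0, 2, n_agents) * 2 - 1))
    x += active * np.where(toward_zero, step, -step)

vals, counts = np.unique(np.abs(x), return_counts=True)
# For |x| >= 1 the stationary law is exponential with ratio q/p per unit step:
ratio = counts[2:7] / counts[1:6].astype(float)
print("P(|x|=k+1)/P(|x|=k) for k>=1:", np.round(ratio, 3),
      "; predicted (0.5-eps)/(0.5+eps) =", round((0.5 - eps) / (0.5 + eps), 3))
```

The measured ratios match the detailed-balance prediction, reproducing the exponential stationary distribution of fickle-agent scores.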
The orbital PDF: general inference of the gravitational potential from steady-state tracers
NASA Astrophysics Data System (ADS)
Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.
2016-02-01
We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method ('phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes to show that they are unbiased as well as efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
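The oPDF itself is easy to demonstrate: along a fixed orbit, dP ∝ dt means the radius is visited with density proportional to 1/|v_r(r)|. A sketch for a Kepler potential with GM = 1 (the energy and angular momentum are illustrative); this is the sampling idea underlying the likelihood estimator, not the authors' oPDF code:

```python
import numpy as np

rng = np.random.default_rng(2)

E, L = -0.5, 0.8            # illustrative orbital energy and angular momentum

def vr2(r):
    """Radial velocity squared for a Kepler potential with GM = 1."""
    return 2.0 * (E + 1.0 / r) - (L / r) ** 2

# Turning points: roots of vr2 = 0, a quadratic in u = 1/r.
u = np.real(np.roots([-L**2, 2.0, 2.0 * E]))
r_peri, r_apo = sorted(1.0 / u)

# oPDF sampling: dt/dr = 1/|v_r|, so weight a radial grid by 1/sqrt(vr2),
# trimmed slightly inside the turning points where the integrable divergence
# would blow up numerically.
r_grid = np.linspace(r_peri * 1.001, r_apo * 0.999, 2000)
w = 1.0 / np.sqrt(vr2(r_grid))
w /= w.sum()
radii = rng.choice(r_grid, size=100000, p=w)
print(f"pericentre {r_peri:.3f}, apocentre {r_apo:.3f}, "
      f"time-averaged radius {radii.mean():.3f}")
# Fitting a potential would maximize the summed log of this density over the
# observed tracers, which is the likelihood estimator described above.
```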
NASA Technical Reports Server (NTRS)
Wu, H.; Durante, M.; Lucas, J. N.
2001-01-01
PURPOSE: To study the effect of the interaction distance on the frequency of inter- and intrachromosome exchanges in individual chromosomes with respect to their DNA content. Assumptions: Chromosome exchanges are formed by misrejoining of two DNA double-strand breaks (DSB) induced within an interaction distance, d. It is assumed that chromosomes in the G0/G1 phase of the cell cycle occupy a spherical domain in a cell nucleus, with no spatial overlap between individual chromosome domains. RESULTS: Formulae are derived for the probability of formation of inter-, as well as intra-, chromosome exchanges relating to the DNA content of the chromosome for a given interaction distance. For interaction distances <1 microm, the relative frequency of interchromosome exchanges predicted by the present model is similar to that by Cigarran et al. (1998) based on the assumption that the probability of interchromosome exchanges is proportional to the "surface area" of the chromosome territory. The "surface area" assumption is shown to be a limiting case of d → 0 in the present model. The present model also predicts that the probability of intrachromosome exchanges occurring in individual chromosomes is proportional to their DNA content with correction terms. CONCLUSION: When the interaction distance is small, the "surface area" distribution for chromosome participation in interchromosome exchanges is expected. However, the present model shows that for an interaction distance as large as 1 microm, the predicted probability of interchromosome exchange formation is still close to the surface area distribution. Therefore, this distribution does not necessarily rule out the formation of complex chromosomal aberrations by long-range misrejoining of DSB.
Kato, Tomohiko; Saita, Takahiro
2011-03-16
The magnetism of Pd(1-x)Mn(x) is investigated theoretically. A localized spin model for Mn spins that interact with short-range antiferromagnetic interactions and long-range ferromagnetic interactions via itinerant d electrons is set up, with no adjustable parameters. A multicanonical Monte Carlo simulation, combined with a procedure of symmetry breaking, is employed to discriminate between the ferromagnetic and spin glass orders. The transition temperature and the low-temperature phase are determined from the temperature variation of the specific heat and the probability distributions of the ferromagnetic order parameter and the spin glass order parameter at different concentrations. The calculation results reveal that only the ferromagnetic phase exists at x < 0.02, that only the spin glass phase exists at x > 0.04, and that the two phases coexist at intermediate concentrations. This result agrees semi-quantitatively with experimental results.
NASA Technical Reports Server (NTRS)
Wang, C.-W.; Stark, W.
2005-01-01
This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
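For orientation, the classical standard Gaussian approximation for asynchronous BPSK DS-CDMA treats the multiple-access interference as Gaussian with variance (K-1)/(3N) per bit; this is the baseline such exact analyses refine, not the paper's AQPSK result:

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_sga(K, N, ebn0_db):
    """Standard Gaussian approximation for asynchronous BPSK DS-CDMA:
    effective SINR combines MAI variance (K-1)/(3N) and thermal noise."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    sinr = 1.0 / ((K - 1) / (3.0 * N) + 1.0 / (2.0 * ebn0))
    return q_func(math.sqrt(sinr))

for K in (2, 10, 30):                  # number of simultaneous users
    print(f"K={K:2d}: BER ~ {ber_sga(K, N=127, ebn0_db=10.0):.2e}")
```

In a UEP system the two bit streams of the asymmetric constellation see different effective distances, which is why the less protected bits degrade faster as the interference (or near-far imbalance) grows.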
Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model
NASA Astrophysics Data System (ADS)
Bronski, Jared C.; DeVille, Lee; Jip Park, Moon
2012-09-01
We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
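A numerical companion to the stability analysis (the paper's count of the unstable manifold dimension is analytic): solve the self-consistent locked state on the principal arcsin branch and diagonalize the Jacobian. Frequencies and coupling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

N, K = 20, 4.0
omega = rng.normal(0.0, 1.0, N)
omega -= omega.mean()                          # work in the rotating frame

# Self-consistent phase-locked state: theta_i = arcsin(omega_i/(K r)) on the
# principal branch, with order parameter r = mean(cos(theta)).
r = 0.5
for _ in range(500):
    theta = np.arcsin(np.clip(omega / (K * r), -1.0, 1.0))
    r = np.cos(theta).mean()

# Jacobian of theta_i' = omega_i + (K/N) sum_j sin(theta_j - theta_i).
A = (K / N) * np.cos(np.subtract.outer(theta, theta))
J = A - np.diag(A.sum(axis=1))
eig = np.linalg.eigvalsh(J)
n_unstable = int(np.sum(eig > 1e-8))           # excludes the zero rotation mode
print(f"order parameter r = {r:.3f}, unstable directions: {n_unstable}")
# Flipping some oscillators to the other arcsin branch (pi - theta_i) produces
# the unstable locked states whose manifold dimension the paper counts.
```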
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
On the role of dealing with quantum coherence in amplitude amplification
NASA Astrophysics Data System (ADS)
Rastegin, Alexey E.
2018-07-01
Amplitude amplification is one of the primary tools in building algorithms for quantum computers. This technique generalizes key ideas of the Grover search algorithm. Potentially useful modifications are connected with changing phases in the rotation operations and replacing the intermediate Hadamard transform with an arbitrary unitary one. In addition, an arbitrary initial distribution of the amplitudes may be prepared. We examine trade-off relations between measures of quantum coherence and the success probability in amplitude amplification processes. As measures of coherence, the geometric coherence and the relative entropy of coherence are considered. In terms of the relative entropy of coherence, complementarity relations with the success probability turn out to be the most transparent. The general relations presented are illustrated within several model scenarios of amplitude amplification processes.
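A minimal numerical sketch of the trade-off being studied (a classical state-vector simulation, not the paper's derivation; the problem size and marked set are hypothetical) tracks the success probability alongside the relative entropy of coherence, which for a pure state reduces to the Shannon entropy of the squared amplitudes in the reference basis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, marked = 256, [3]                      # hypothetical search space and marked item

psi0 = rng.random(n)                      # arbitrary initial amplitude distribution
psi0 /= np.linalg.norm(psi0)

flip = np.ones(n)
flip[marked] = -1.0                       # oracle: phase flip on marked states

psi = psi0.copy()
for k in range(25):
    p_succ = float(np.sum(psi[marked] ** 2))
    probs = psi ** 2
    coherence = float(-np.sum(probs * np.log(probs + 1e-300)))
    if k % 4 == 0:
        print(f"iter {k:2d}: success prob {p_succ:.4f}, coherence {coherence:.3f}")
    psi = flip * psi                      # oracle reflection
    psi = 2.0 * psi0 * (psi0 @ psi) - psi # reflection about the initial state
```

As the success probability climbs toward its maximum, the basis-state coherence collapses, which is the qualitative complementarity the abstract formalizes.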
Granular Segregation Driven by Particle Interactions
NASA Astrophysics Data System (ADS)
Lozano, C.; Zuriguel, I.; Garcimartín, A.; Mullin, T.
2015-05-01
We report the results of an experimental study of particle-particle interactions in a horizontally shaken granular layer that undergoes a second order phase transition from a binary gas to a segregation liquid as the packing fraction C is increased. By focusing on the behavior of individual particles, the effect of C is studied on (1) the process of cluster formation, (2) cluster dynamics, and (3) cluster destruction. The outcomes indicate that the segregation is driven by two mechanisms: attraction between particles with the same properties and random motion with a characteristic length that is inversely proportional to C. All clusters investigated are found to be transient and the probability distribution functions of the separation times display a power law tail, indicating that the splitting probability decreases with time.
Growing and navigating the small world Web by local content
Menczer, Filippo
2002-01-01
Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.
The effects of distributed life cycles on the dynamics of viral infections.
Campos, Daniel; Méndez, Vicenç; Fedotov, Sergei
2008-09-21
We explore the role of cellular life cycles for viruses and host cells in an infection process. For this purpose, we derive a generalized version of the basic model of virus dynamics (Nowak, M.A., Bangham, C.R.M., 1996. Population dynamics of immune responses to persistent viruses. Science 272, 74-79) from a mesoscopic description. In its final form the model can be written as a set of Volterra integrodifferential equations. We consider the role of distributed lifespans and an intracellular (eclipse) phase. These processes are implemented by means of probability distribution functions. The basic reproductive ratio R(0) of the infection is properly defined in terms of such distributions by using an analysis of the equilibrium states and their stability. It is concluded that the introduction of distributed delays can strongly modify both the value of R(0) and the predictions for the virus loads, so the effects on the infection dynamics are of major importance. We also show how the model presented here can be applied to some simple situations where direct comparison with experiments is possible. Specifically, phage-bacteria interactions are analyzed. The dynamics of the eclipse phase for phages is characterized analytically, which allows us to compare the performance of three different fits previously proposed for the one-step growth curve.
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify breakdown probability distributions in vacuum gaps. Further, the breakdown probability distribution of a double-break vacuum circuit breaker was investigated. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution with a location parameter, which represents the voltage below which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14 and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution with a location parameter. The shape parameter after no-load switching was 6∼8.5 and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even when the bias in the vacuum interrupters' voltage sharing is taken into account.
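The fit described above, a three-parameter Weibull whose location parameter is the zero-breakdown-probability voltage, can be reproduced in outline with scipy; the data here are synthetic stand-ins, not the paper's measurements:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
# synthetic breakdown voltages (kV): shape ~12, location = zero-probability voltage
data = weibull_min.rvs(c=12.0, loc=40.0, scale=25.0, size=200, random_state=rng)

# maximum-likelihood fit of shape, location, and scale (starting values may
# need tuning for real data)
shape, loc, scale = weibull_min.fit(data)
print(f"shape parameter: {shape:.1f}")
print(f"location (voltage with zero breakdown probability): {loc:.1f} kV")
print(f"scale parameter: {scale:.1f} kV")
```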
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given the numbers of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days, distributed as a Beta-binomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Beta-binomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Beta-binomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Beta-binomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
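In code, this Beta-binomial adherence estimate looks roughly like the following (scipy ≥ 1.4 for scipy.stats.betabinom; the prior parameters and the respondent's record below are hypothetical, whereas the paper estimates α and β by maximum likelihood):

```python
from scipy.stats import betabinom

alpha, beta_ = 2.0, 2.0        # stand-ins for the ML-estimated Beta(alpha, beta) prior
active, inactive = 5, 2        # one respondent: 5 active days out of 7 assessed

# conditional distribution of the number of active days in a 7-day week,
# Beta-binomial(n = 7, alpha + active days, beta + inactive days)
dist = betabinom(7, alpha + active, beta_ + inactive)
print("P(meeting the guidelines 7 out of 7 days) =", round(dist.pmf(7), 4))
```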
NASA Astrophysics Data System (ADS)
Shemer, L.; Sergeeva, A.
2009-12-01
The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly-distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield a sufficient sample size for reliable statistics. Throughout the experiments, an effort was made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at a distance of about 100 m. Acknowledgement: This study was carried out in the framework of the EC-supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for the rectangular initial spectral shape; carrier wave period T0 = 1.5 s.
Strategies and trajectories of coral reef fish larvae optimizing self-recruitment.
Irisson, Jean-Olivier; LeVan, Anselme; De Lara, Michel; Planes, Serge
2004-03-21
Like many marine organisms, most coral reef fishes have a dispersive larval phase. The fate of this phase is of great concern for their ecology as it may determine population demography and connectivity. As direct study of the larval phase is difficult, we tackle the question of dispersion from an opposite point of view and study self-recruitment. In this paper, we propose a mathematical model of the pelagic phase, parameterized by a limited number of factors (currents, predator and prey distributions, energy budgets) and which focuses on the behavioral response of the larvae to these factors. We evaluate optimal behavioral strategies of the larvae (i.e. strategies that maximize the probability of return to the natal reef) and examine the trajectories of dispersal that they induce. Mathematically, larval behavior is described by a controlled Markov process. A strategy induces a sequence, indexed by time steps, of "decisions" (e.g. looking for food, swimming in a given direction). Biological, physical and topographic constraints are captured through the transition probabilities and the sets of possible decisions. Optimal strategies are found by means of the so-called stochastic dynamic programming equation. A computer program is developed and optimal decisions and trajectories are numerically derived. We conclude that this technique can be considered as a good tool to represent plausible larval behaviors and that it has great potential in terms of theoretical investigations and also for field applications.
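The stochastic dynamic programming equation the authors solve can be illustrated on a deliberately tiny controlled Markov chain. Everything below (the one-dimensional geometry, the two decisions, the transition numbers) is a hypothetical toy, not the paper's parameterization:

```python
import numpy as np

L, T = 20, 30                        # sites (0 = natal reef) and time horizon

def kernel(p_toward, p_stay):
    """Transition matrix for one decision; the reef (site 0) is absorbing."""
    P = np.zeros((L, L))
    P[0, 0] = 1.0
    for i in range(1, L):
        P[i, i - 1] = p_toward
        P[i, i] += p_stay
        P[i, min(i + 1, L - 1)] += 1.0 - p_toward - p_stay
    return P

actions = {"swim": kernel(0.6, 0.2),   # biased toward the reef (hypothetical)
           "drift": kernel(0.3, 0.4)}  # cheaper but less directed (hypothetical)

V = np.zeros(L); V[0] = 1.0            # terminal reward: having reached the reef
policy = np.empty((T, L), dtype=object)
for t in reversed(range(T)):           # backward induction (dynamic programming)
    Q = {a: P @ V for a, P in actions.items()}
    for i in range(L):
        policy[t, i] = max(Q, key=lambda a: Q[a][i])
        V[i] = Q[policy[t, i]][i]

print("P(return to reef) starting 10 sites away:", round(V[10], 3))
```

The full model adds energy budgets, prey and predator fields, and realistic currents to the state and the transition probabilities, but the backward-induction structure is the same.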
Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang
2014-08-25
The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over a turbulent atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of the EW distribution are compared with the Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
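For readers who want to reproduce the flavor of these results, a Monte Carlo sketch is given below. It samples exponentiated-Weibull irradiance by inverting the EW CDF F(h) = (1 − exp(−(h/η)^β))^α and averages a conditional BPSK error rate; the fading parameters, SNR, and outage threshold are hypothetical, and real receiver models add further detail:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)
alpha, beta, eta = 5.0, 2.5, 1.0          # hypothetical EW fading parameters
snr = 10 ** (10.0 / 10)                   # 10 dB average electrical SNR

# inverse-CDF sampling of the EW-distributed irradiance h
u = rng.random(1_000_000)
h = eta * (-np.log1p(-u ** (1.0 / alpha))) ** (1.0 / beta)

# conditional BPSK error rate Q(sqrt(2*snr)*h) = 0.5*erfc(sqrt(snr)*h),
# averaged over the fading
ber = np.mean(0.5 * erfc(np.sqrt(snr) * h))
print(f"average BER: {ber:.3e}")

# outage probability against a hypothetical irradiance threshold
print(f"outage probability: {np.mean(h < 0.3):.3e}")
```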
Almost conserved operators in nearly many-body localized systems
NASA Astrophysics Data System (ADS)
Pancotti, Nicola; Knap, Michael; Huse, David A.; Cirac, J. Ignacio; Bañuls, Mari Carmen
2018-03-01
We construct almost conserved local operators, which possess a minimal commutator with the Hamiltonian of the system, near the many-body localization transition of a one-dimensional disordered spin chain. We collect statistics of these slow operators for different support sizes and disorder strengths, both using exact diagonalization and tensor networks. Our results show that the scaling of the average of the smallest commutators with the support size is sensitive to Griffiths effects in the thermal phase and the onset of many-body localization. Furthermore, we demonstrate that the probability distributions of the commutators can be analyzed using extreme value theory and that their tails reveal the difference between diffusive and subdiffusive dynamics in the thermal phase.
NASA Astrophysics Data System (ADS)
Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko
2018-01-01
Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
On the SIMS Ionization Probability of Organic Molecules.
Popczun, Nicholas J; Breuer, Lars; Wucher, Andreas; Winograd, Nicholas
2017-06-01
The prospect of improved secondary ion yields for secondary ion mass spectrometry (SIMS) experiments drives innovation of new primary ion sources, instrumentation, and post-ionization techniques. The largest factor affecting secondary ion efficiency is believed to be the poor ionization probability (α⁺) of sputtered material, a value rarely measured directly, but estimated to be in some cases as low as 10⁻⁵. Our lab has developed a method for the direct determination of α⁺ in a SIMS experiment using laser post-ionization (LPI) to detect neutral molecular species in the sputtered plume for an organic compound. Here, we apply this method to coronene (C₂₄H₁₂), a polyaromatic hydrocarbon that exhibits strong molecular signal during gas-phase photoionization. A two-dimensional spatial distribution of sputtered neutral molecules is measured and presented. It is shown that the ionization probability of molecular coronene desorbed from a clean film under bombardment with 40 keV C₆₀ cluster projectiles is of the order of 10⁻³, with some remaining uncertainty arising from laser-induced fragmentation and possible differences in the emission velocity distributions of neutral and ionized molecules. In general, this work establishes a method to estimate the ionization efficiency of molecular species sputtered during a single bombardment event.
NASA Astrophysics Data System (ADS)
Quinn, Kevin Martin
The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and the National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. For the three models in the suite with continuous time series of high resolution output, there is substantial variability in when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts to mid-latitude storm tracks are projected over the West Coast and eastern North America, with no less than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
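The modified record process is straightforward to simulate. This sketch (exponential entries, constant handicap, hypothetical numbers) estimates the fraction of entries that are records, the quantity whose behavior distinguishes the two phases:

```python
import numpy as np

rng = np.random.default_rng(4)
T, delta = 200_000, 0.5                 # sequence length and constant handicap

x = rng.exponential(1.0, T)             # the exponential case with a true transition
record_value, records = -np.inf, []
for xi in x:
    if xi > record_value - delta:       # delta-exceedance condition Y_k > Y_{k-1} - delta
        record_value = xi
        records.append(xi)

print(f"fraction of entries that are records: {len(records) / T:.4f}")
print(f"mean of the last 100 record values:  {np.mean(records[-100:]):.2f}")
```

Sweeping delta and watching whether the mean record value saturates or keeps growing with T reproduces, in simulation, the normal/stationary distinction described above.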
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameters, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups of a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model
NASA Astrophysics Data System (ADS)
Mankin, Romi; Paekivi, Sander
2018-01-01
The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent α_c ≈ 0.402, which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model with internal and external noise are also discussed.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution.
A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.
Lim, Jongwoo; Wang, Bohyun; Lim, Joon S
2016-04-29
Finding the minimum number of appropriate biomarkers for specific targets such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers that extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidate biomarker datasets with a specific neuro-fuzzy algorithm, called a neural network with weighted fuzzy membership functions (NEWFM). In the first phase, the proposed framework extracts candidate biomarkers using a Bhattacharyya distance method that measures the similarity of two discrete probability distributions. The proposed framework thereby reduces the cost of finding biomarkers, since no additional medical supplements are needed, and improves the accuracy of the biomarkers in specific cancer target datasets.
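The similarity measure used in the first phase is simple to state in code. A minimal sketch with made-up expression histograms (the real framework applies it per gene across microarray classes):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete probability distributions."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    bc = np.sqrt(p * q).sum()            # Bhattacharyya coefficient
    return -np.log(bc)

# hypothetical binned expression levels of one gene in two classes
healthy = [5, 20, 40, 25, 10]
tumor = [30, 35, 20, 10, 5]
print(f"D_B = {bhattacharyya_distance(healthy, tumor):.3f}")
```

Genes ranked by this distance (largest first) would form the candidate-biomarker pool passed to the NEWFM selection phase.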
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: as early as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution," a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, the distribution most relevant to statistical analysis.
Endogenous modulation of low frequency oscillations by temporal expectations
Cravo, Andre M.; Rohenkohl, Gustavo; Wyart, Valentin
2011-01-01
Recent studies have associated increasing temporal expectations with synchronization of higher frequency oscillations and suppression of lower frequencies. In this experiment, we explore a proposal that low-frequency oscillations provide a mechanism for regulating temporal expectations. We used a speeded Go/No-go task and manipulated temporal expectations by changing the probability of target presentation after certain intervals. Across two conditions, the temporal conditional probability of target events differed substantially at the first of three possible intervals. We found that reaction times differed significantly at this first interval across conditions, decreasing with higher temporal expectations. Interestingly, the power of theta activity (4–8 Hz), distributed over central midline sites, also differed significantly across conditions at this first interval. Furthermore, we found a transient coupling between theta phase and beta power after the first interval in the condition with high temporal expectation for targets at this time point. Our results suggest that the adjustments in theta power and the phase-power coupling between theta and beta contribute to a central mechanism for controlling neural excitability according to temporal expectations.
NASA Technical Reports Server (NTRS)
Lambert, Winifred C.
2003-01-01
This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based graphical user interface (GUI) tool was created to display the data quickly.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
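The two-step recipe in this abstract (color white Gaussian noise to the target spectrum, then apply an inverse-transform mapping to the target marginal) can be sketched in one dimension as follows. The filter and the exponential target marginal are arbitrary choices for illustration, and, as the paper discusses, the nonlinear mapping slightly distorts the achieved spectrum:

```python
import numpy as np
from scipy.special import ndtr            # standard normal CDF
from scipy.stats import expon, skew

rng = np.random.default_rng(5)
n = 2 ** 16

# step 1: color a white Gaussian sequence to a target power spectrum
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n, d=1.0)
H = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)  # hypothetical low-pass amplitude response
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                  # restore a unit-variance Gaussian marginal

# step 2: zero-memory nonlinear transform to the target pdf, x = F_target^{-1}(Phi(g))
samples = expon.ppf(ndtr(colored))
print(f"mean = {samples.mean():.3f} (target 1), skewness = {skew(samples):.2f} (target 2)")
```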
A product of independent beta probabilities dose escalation design for dual-agent phase I trials.
Mander, Adrian P; Sweeting, Michael J
2015-04-15
Dual-agent trials are now increasingly common in oncology research, and many proposed dose-escalation designs are available in the statistical literature. Despite this, the translation from statistical design to practical application is slow, as has been highlighted in single-agent phase I trials, where a 3 + 3 rule-based design is often still used. To expedite this process, new dose-escalation designs need to be not only scientifically beneficial but also easy to understand and implement by clinicians. In this paper, we propose a curve-free (nonparametric) design for a dual-agent trial in which the model parameters are the probabilities of toxicity at each of the dose combinations. We show that it is relatively trivial for a clinician's prior beliefs or historical information to be incorporated in the model and that updating is fast and computationally simple through the use of conjugate Bayesian inference. Monotonicity is ensured by considering only a set of monotonic contours for the distribution of the maximum tolerated contour, which defines the dose-escalation decision process. Varied experimentation around the contour is achievable, and multiple dose combinations can be recommended to take forward to phase II. Code for R, Stata and Excel is available for implementation. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distributions of 69 potentially threatening NEOs from these lists, which produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases, objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted with the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of a uniform impact distribution, and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduces significant variation into the results, and the impact probability distribution consequently deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and thus represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
Reconstruction of Porous Media with Multiple Solid Phases
Losic; Thovert; Adler
1997-02-15
A process is proposed to generate three-dimensional multiphase porous media with fixed phase probabilities and an overall correlation function. By varying the parameters, a specific phase can be located either at the interface between two phases or within a single phase. When the interfacial phase has a relatively small probability, its shape can be chosen as granular or lamellar. The influence of a third phase on the macroscopic conductivity of a medium is illustrated.
Velocity statistics of the Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael
2016-02-01
The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
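For readers unfamiliar with the model, a compact simulation of the Nagel-Schreckenberg automaton that accumulates the velocity PDF and the standing-vehicle probability discussed above (road length, density, and slowdown probability are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(6)
L, density, vmax, p_slow, steps = 1000, 0.15, 5, 0.3, 2000

ncars = int(density * L)
pos = np.sort(rng.choice(L, ncars, replace=False))
vel = rng.integers(0, vmax + 1, ncars)

counts = np.zeros(vmax + 1, dtype=int)
for t in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L   # empty cells ahead of each car
    vel = np.minimum(vel + 1, vmax)           # 1) acceleration
    vel = np.minimum(vel, gaps)               # 2) braking
    slow = rng.random(ncars) < p_slow         # 3) random deceleration
    vel[slow] = np.maximum(vel[slow] - 1, 0)
    pos = (pos + vel) % L                     # 4) movement
    if t > steps // 2:                        # sample after the transient
        counts += np.bincount(vel, minlength=vmax + 1)

pdf = counts / counts.sum()
print("velocity PDF:", np.round(pdf, 4))
print("P(standing vehicle) =", round(float(pdf[0]), 4))  # candidate order parameter
```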
Fitzpatrick, Faith A.; Arnold, Terri L.; Colman, John A.
1998-01-01
Geochemical data for the upper Illinois River Basin are presented for concentrations of 39 elements in streambed sediment collected by the U.S. Geological Survey in the fall of 1987. These data were collected as part of the pilot phase of the National Water-Quality Assessment Program. A total of 372 sites were sampled, with 238 sites located on first- and second-order streams, and 134 sites located on main stems. Spatial distribution maps and exceedance probability plots are presented for aluminum, antimony, arsenic, barium, beryllium, boron, cadmium, calcium, carbon (total, inorganic, and organic), cerium, chromium, cobalt, copper, gallium, iron, lanthanum, lead, lithium, magnesium, manganese, mercury, molybdenum, neodymium, nickel, niobium, phosphorus, potassium, scandium, selenium, silver, sodium, strontium, sulfur, thorium, titanium, uranium, vanadium, yttrium, and zinc. For spatial distribution maps, concentrations of the elements are grouped into four ranges bounded by the minimum concentration, the 10th, 50th, and 90th percentiles, and the maximum concentrations. These ranges were selected to highlight streambed sediment with very low or very high element concentrations relative to the rest of the streambed sediment in the upper Illinois River Basin. Exceedance probability plots for each element display the differences, if any, in distributions between high- and low-order streams and may be helpful in determining differences between background and elevated concentrations.
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information; for example, consider information from two (or more) clinicians, where the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
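Because the mixture of Betas is conjugate, the posterior update is a few lines: the component parameters update as usual, and the mixture weights are reweighted by each component's Beta-binomial marginal likelihood. A sketch with hypothetical pessimistic/optimistic priors and trial data:

```python
import numpy as np
from scipy.stats import beta, betabinom

priors = [(0.5, 2.0, 8.0),    # (weight, a, b): pessimistic clinician (hypothetical)
          (0.5, 8.0, 2.0)]    # optimistic clinician (hypothetical)
n, x, target = 30, 14, 0.30   # trial size, observed responses, target response rate

# posterior mixture weights are proportional to the prior weight times the
# Beta-binomial marginal likelihood of the data under each component
marg = np.array([w * betabinom(n, a, b).pmf(x) for w, a, b in priors])
post_w = marg / marg.sum()

# posterior P(true response rate > target): weighted component tail probabilities
tails = np.array([beta(a + x, b + n - x).sf(target) for _, a, b in priors])
print("posterior weights:", np.round(post_w, 3))
print("P(p > target | data) =", round(float(post_w @ tails), 3))
```

Sample size design then amounts to repeating this calculation over candidate n and hypothetical outcomes until the posterior criterion is met with the desired assurance.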
Evolution of axis ratios from phase space dynamics of triaxial collapse
NASA Astrophysics Data System (ADS)
Nadkarni-Ghosh, Sharvari; Arya, Bhaskar
2018-04-01
We investigate the evolution of axis ratios of triaxial haloes using the phase space description of triaxial collapse. In this formulation, the evolution of the triaxial ellipsoid is described in terms of the dynamics of eigenvalues of three important tensors: the Hessian of the gravitational potential, the tensor of velocity derivatives, and the deformation tensor. The eigenvalues of the deformation tensor are directly related to the parameters that describe triaxiality, namely, the minor-to-major and intermediate-to-major axes ratios (s and q) and the triaxiality parameter T. Using the phase space equations, we evolve the eigenvalues and examine the evolution of the probability distribution function (PDF) of the axes ratios as a function of mass scale and redshift for Gaussian initial conditions. We find that the ellipticity and prolateness increase with decreasing mass scale and decreasing redshift. These trends agree with previous analytic studies but differ from numerical simulations. However, the PDF of the scaled parameter q̃ = (q − s)/(1 − s) follows a universal distribution over two decades in mass range and redshifts, which is in qualitative agreement with the universality for the conditional PDF reported in simulations. We further show using the phase space dynamics that, in fact, q̃ is a phase space invariant and is conserved individually for each halo. These results demonstrate that the phase space analysis is a useful tool that provides a different perspective on the evolution of perturbations and can be applied to more sophisticated models in the future.
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
Dynamic properties of molecular motors in burnt-bridge models
NASA Astrophysics Data System (ADS)
Artyomov, Maxim N.; Morozov, Alexander Yu; Pronina, Ekaterina; Kolomeisky, Anatoly B.
2007-08-01
Dynamic properties of molecular motors that fuel their motion by actively interacting with underlying molecular tracks are studied theoretically via discrete-state stochastic 'burnt-bridge' models. The transport of the particles is viewed as an effective diffusion along one-dimensional lattices with periodically distributed weak links. When an unbiased random walker passes the weak link it can be destroyed ('burned') with probability p, providing a bias in the motion of the molecular motor. We present a theoretical approach that allows one to calculate exactly all dynamic properties of motor proteins, such as velocity and dispersion, under general conditions. It is found that dispersion is a decreasing function of the concentration of bridges, while the dependence of dispersion on the burning probability is more complex. Our calculations also show a gap in dispersion for very low concentrations of weak links or for very low burning probabilities which indicates a dynamic phase transition between unbiased and biased diffusion regimes. Theoretical findings are supported by Monte Carlo computer simulations.
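A bare-bones Monte Carlo version of the burnt-bridge walk (spacing, burning probability, and run length are illustrative) shows how the forward bias emerges from burning; estimating dispersion would require repeating the run over an ensemble:

```python
import numpy as np

rng = np.random.default_rng(7)
N, p_burn, steps = 10, 0.2, 500_000     # weak-link spacing, burning prob., hop attempts

x, burned = 0, set()
for _ in range(steps):
    step = 1 if rng.random() < 0.5 else -1
    bond = x if step == 1 else x - 1    # bond crossed by this hop
    if bond in burned:
        continue                        # a burned bridge cannot be recrossed
    x += step
    if bond % N == 0 and rng.random() < p_burn:
        burned.add(bond)                # weak link destroyed ('burned') after crossing

print(f"drift velocity ≈ {x / steps:.4f} sites per attempted hop")
```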
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Analysis of the cycle-to-cycle pressure distribution variations in dynamic stall
NASA Astrophysics Data System (ADS)
Harms, Tanner; Nikoueeyan, Pourya; Naughton, Jonathan
2017-11-01
Dynamic stall is an unsteady flow phenomenon observed on blades and wings that, despite decades of focused study, remains a challenging problem for rotorcraft and wind turbine applications. Traditionally, dynamic stall has been studied on pitch-oscillating airfoils by measuring the unsteady pressure distribution that is phase-averaged, by which the typical flow pattern may be observed and quantified. In cases where light to deep dynamic stall are observed, pressure distributions with high levels of variance are present in regions of separation. It was recently observed that, under certain conditions, this scatter may be the result of a two-state flow solution - as if there were a bifurcation in the unsteady pressure distribution behavior on the suction side of the airfoil. This is significant since phase-averaged dynamic stall data are often used to tune dynamic stall models and for validation of simulations of dynamic stall. In order to better understand this phenomenon, statistical analysis of the pressure data using probability density functions (PDFs) and other statistical approaches has been carried out for the SC 1094R8, DU97-W-300, and NACA 0015 airfoil geometries. This work uses airfoil data acquired under Army contract W911W60160C-0021, DOE Grant DE-SC0001261, and a gift from BP Alternative Energy North America, Inc.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
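As a rough illustration of the ZMNL step mentioned above, a Gaussian time history can be mapped through the Gaussian CDF and a target inverse CDF to impose a non-Gaussian marginal distribution (the gamma target and sample length here are arbitrary choices; the paper's full method additionally corrects the spectral distortion this transform introduces, which this sketch does not):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(65536)      # Gaussian input time history

# Map through the Gaussian CDF, then the target inverse CDF (a skewed,
# heavy-tailed gamma target chosen purely for illustration).
u = stats.norm.cdf(x)
y = stats.gamma.ppf(u, a=2.0, scale=1.0)

print("skewness:", stats.skew(y), "excess kurtosis:", stats.kurtosis(y))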
NASA Astrophysics Data System (ADS)
Monthus, Cécile; Garel, Thomas
2007-03-01
We consider the low temperature T
Using sketch-map coordinates to analyze and bias molecular dynamics simulations
Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele
2012-01-01
When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a phase space of high-dimensionality. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
Species survival and scaling laws in hostile and disordered environments
NASA Astrophysics Data System (ADS)
Rocha, Rodrigo P.; Figueiredo, Wagner; Suweis, Samir; Maritan, Amos
2016-10-01
In this work we study the likelihood of survival of a single species in the context of hostile and disordered environments. Population dynamics in this environment, as modeled by the Fisher equation, is characterized by negative average growth rate, except in some random spatially distributed patches that may support life. In particular, we are interested in the phase diagram of the survival probability and in the critical size problem, i.e., the minimum patch size required for surviving in the long-time dynamics. We propose a measure for the critical patch size as being proportional to the participation ratio of the eigenvector corresponding to the largest eigenvalue of the linearized Fisher dynamics. We obtain the (extinction-survival) phase diagram and the probability distribution function (PDF) of the critical patch sizes for two topologies, namely, the one-dimensional system and the fractal Peano basin. We show that both topologies share the same qualitative features, but the fractal topology requires higher spatial fluctuations to guarantee species survival. We perform a finite-size scaling and we obtain the associated scaling exponents. In addition, we show that the PDF of the critical patch sizes has a universal shape for the 1D case in terms of the model parameters (diffusion, growth rate, etc.). In contrast, the diffusion coefficient has a drastic effect on the PDF of the critical patch sizes of the fractal Peano basin, and it does not obey the same scaling law as the 1D case.
Phased models for evaluating the performability of computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.; Meyer, J. F.
1979-01-01
A phase-by-phase modelling technique is introduced to evaluate a fault tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
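A toy numerical sketch of the phase-by-phase bookkeeping (the two-phase mission, the state space, and the matrix entries below are invented for illustration; the paper develops more general iterative solution methods):

import numpy as np

# States: 0 = operational, 1 = failed. State probabilities are propagated
# through each phase by an intraphase matrix and between phases by an
# interphase transition matrix.
P_phase1 = np.array([[0.98, 0.02],
                     [0.00, 1.00]])
H_12     = np.array([[0.99, 0.01],   # reconfiguration between phases
                     [0.00, 1.00]])
P_phase2 = np.array([[0.95, 0.05],
                     [0.00, 1.00]])

p0 = np.array([1.0, 0.0])                 # start fully operational
p_end = p0 @ P_phase1 @ H_12 @ P_phase2   # end-of-mission state distribution
print("P(mission success) =", p_end[0])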
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
The optical communication link outage probability in satellite formation flying
NASA Astrophysics Data System (ADS)
Arnon, Shlomi; Gill, Eberhard
2014-02-01
In recent years, several space systems consisting of multiple satellites flying in close formation have been proposed for various purposes such as interferometric synthetic aperture radar measurement (TerraSAR-X and the TanDEM-X), detecting extra-solar earth-like planets (Terrestrial Planet Finder-TPF and Darwin), and demonstrating distributed space systems (DARPA F6 project). Another important purpose, which is the concern of this paper, is improving radio frequency communication to mobile terrestrial and maritime subscribers. In this case, radio frequency signals from several satellites coherently combine such that the received/transmit signal strength is increased proportionally with the number of satellites in the formation. This increase in signal strength makes it possible to enhance the communication data rate and/or to reduce the energy consumption and antenna size of terrestrial mobile users' equipment. However, a coherent combination of signals without aligning the phases of the individual communication signals interrupts the communication link between the satellites and the user and causes outages. The accuracy of the phase estimation is a function of the inter-satellite laser ranging system performance. This paper derives an outage probability model of a coherent combination communication system as a function of the pointing vibration and jitter statistics of an inter-satellite laser ranging system tool. The coherent combination outage probability model, which could be used to improve communication to mobile subscribers in the air, at sea, and on the ground, is the main contribution of this work.
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. http://probonto.org mjswat@ebi.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probabilistic distribution methods for determining the best suitable model in probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the time period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated using the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate these distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distribution methods in this region.
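A sketch of this kind of model comparison in Python with scipy.stats (the interevent times are synthetic stand-ins for the catalogue, and scipy's invweibull plays the role of the Frechet distribution):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
interevent_years = stats.weibull_min.rvs(1.3, scale=8.0, size=80, random_state=rng)

candidates = {
    "Weibull (2-par)": stats.weibull_min,
    "Frechet":         stats.invweibull,   # scipy's name for the Frechet law
    "Weibull (3-par)": stats.weibull_min,  # location parameter left free
}
for name, dist in candidates.items():
    # fix the location at zero for the two-parameter forms
    params = dist.fit(interevent_years) if "3-par" in name \
             else dist.fit(interevent_years, floc=0)
    ks = stats.kstest(interevent_years, dist.cdf, args=params)
    print(f"{name:16s} K-S statistic = {ks.statistic:.3f}")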
A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, R; Wang, C; Dimarco, G
2007-10-09
If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
NASA Astrophysics Data System (ADS)
Mukherjee, Sudip; Rajak, Atanu; Chakrabarti, Bikas K.
2018-02-01
We explore the behavior of the order parameter distribution of the quantum Sherrington-Kirkpatrick model in the spin glass phase using Monte Carlo technique for the effective Suzuki-Trotter Hamiltonian at finite temperatures and that at zero temperature obtained using the exact diagonalization method. Our numerical results indicate the existence of a low- but finite-temperature quantum-fluctuation-dominated ergodic region along with the classical fluctuation-dominated high-temperature nonergodic region in the spin glass phase of the model. In the ergodic region, the order parameter distribution gets narrower around the most probable value of the order parameter as the system size increases. In the other region, the Parisi order distribution function has nonvanishing value everywhere in the thermodynamic limit, indicating nonergodicity. We also show that the average annealing time for convergence (to a low-energy level of the model, within a small error range) becomes system size independent for annealing down through the (quantum-fluctuation-dominated) ergodic region. It becomes strongly system size dependent for annealing through the nonergodic region. Possible finite-size scaling-type behavior for the extent of the ergodic region is also addressed.
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
Three paths toward the quantum angle operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl
2016-12-15
We examine mathematical questions around angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine and completed by a sign inversion. The two other methods are integral quantization generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
Evaluation of a locally homogeneous model of spray evaporation
NASA Technical Reports Server (NTRS)
Shearer, A. J.; Faeth, G. M.; Tamura, H.
1978-01-01
Measurements were conducted on an evaporating spray in a stagnant environment. The spray was formed using an air-atomizing injector to yield a Sauter mean diameter of the order of 30 microns. The region where evaporation occurred extended approximately 1 m from the injector for the test conditions. Profiles of mean velocity, temperature, composition, and drop size distribution, as well as velocity fluctuations and Reynolds stress, were measured. The results are compared with a locally homogeneous two-phase flow model which implies no velocity difference and thermodynamic equilibrium between the phases. The flow was represented by a k-epsilon-g turbulence model employing a clipped Gaussian probability density function for mixture fraction fluctuations. The model provides a good representation of earlier single-phase jet measurements, but generally overestimates the rate of development of the spray. Using the model predictions to represent conditions along the centerline of the spray, drop life-history calculations were conducted which indicate that these discrepancies are due to slip and loss of thermodynamic equilibrium between the phases.
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution into the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
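A hedged illustration of this point with synthetic data (the lognormal stands in for "an asymmetric distribution"; the authors do not necessarily use this family):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rms = stats.lognorm.rvs(s=0.4, scale=2.0, size=500, random_state=rng)  # synthetic roughness, nm

mu, sigma = stats.norm.fit(rms)        # Gaussian fit: mode = mean = mu
s, loc, scale = stats.lognorm.fit(rms, floc=0)
lognorm_mode = scale * np.exp(-s**2)   # mode of a lognormal

print(f"Gaussian mode estimate:  {mu:.2f} nm")   # biased high for skewed data
print(f"Lognormal mode estimate: {lognorm_mode:.2f} nm")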
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments.
Interactive Model Visualization for NET-VISA
NASA Astrophysics Data System (ADS)
Kuzma, H. A.; Arora, N. S.
2013-12-01
NET-VISA is a probabilistic system developed for seismic network processing of data measured on the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). NET-VISA is composed of a Generative Model (GM) and an Inference Algorithm (IA). The GM is an explicit mathematical description of the relationships between various factors in seismic network analysis. Some of the relationships inside the GM are deterministic and some are statistical. Statistical relationships are described by probability distributions, the exact parameters of which (such as mean and standard deviation) are found by training NET-VISA using recent data. The IA uses the GM to evaluate the probability of various events and associations, searching for the seismic bulletin which has the highest overall probability and is consistent with a given set of measured arrivals. An Interactive Model Visualization tool (IMV) has been developed which makes 'peeking into' the GM simple and intuitive through a web-based interface. For example, it is now possible to access the probability distributions for attributes of events and arrivals such as the detection rate for each station for each of 14 phases. It also clarifies the assumptions and prior knowledge that are incorporated into NET-VISA's event determination. When NET-VISA is retrained, the IMV will be a visual tool for quality control both as a means of testing that the training has been accomplished correctly and that the IMS network has not changed unexpectedly. A preview of the IMV will be shown at this poster presentation. (Figure captions: the IMV homepage; the IMV showing the current model file and reference image.)
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
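A toy sketch of the core idea (the tree encoding as (clade, split) pairs and the four-taxon sample are invented for illustration; the paper's algorithm and software are more general):

from collections import Counter, defaultdict

# Estimate a tree's posterior probability as a product of conditional clade
# probabilities: for each clade, count how often each split of it appears
# among sampled trees, then multiply conditional frequencies along the tree.
def clade(*taxa):
    return frozenset(taxa)

# Toy posterior sample: 3 copies of tree1, 1 copy of tree2 on taxa {A,B,C,D}.
tree1 = {(clade("A", "B", "C", "D"), (clade("A", "B"), clade("C", "D")))}
tree2 = {(clade("A", "B", "C", "D"), (clade("A", "C"), clade("B", "D")))}
sample = [tree1, tree1, tree1, tree2]

split_counts = defaultdict(Counter)
for tree in sample:
    for parent, split in tree:
        split_counts[parent][split] += 1

def tree_probability(tree):
    p = 1.0
    for parent, split in tree:
        total = sum(split_counts[parent].values())
        p *= split_counts[parent][split] / total
    return p

print(tree_probability(tree1))  # 0.75 for this toy sample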
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z basis with certain probabilities, and Bob measures received pulses with optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation result taking into account of statistical fluctuations, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% comparing to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
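A minimal sketch of the single-integral form with trapezoidal integration (the distribution families and parameters are invented; the point of the paper is precisely that such choices must be justified rather than assumed normal):

import numpy as np
from scipy import stats

# Slip occurs when available friction falls below required friction:
# P(slip) = integral of f_req(mu) * F_avail(mu) d(mu)
f_req   = stats.lognorm(s=0.25, scale=0.22).pdf   # required-friction pdf
F_avail = stats.norm(loc=0.45, scale=0.10).cdf    # available-friction CDF

mu = np.linspace(0.0, 1.5, 2001)
g = f_req(mu) * F_avail(mu)
dx = mu[1] - mu[0]
p_slip = dx * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)  # trapezoidal rule
print(f"P(slip per step) = {p_slip:.4f}")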
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Using Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators whose outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
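A software analogue of the lookup idea (a sketch only; the actual circuit's pipeline of flip-flops, comparators, and memories is not reproduced here, and the target distribution is arbitrary):

import bisect
import random

weights = [i + 1 for i in range(256)]   # illustrative nonuniform target pmf
total = sum(weights)
cdf = []
acc = 0
for w in weights:
    acc += w
    cdf.append(acc / total)

def sample_8bit():
    # inverse-transform sampling: map a uniform variate through the CDF table
    return min(bisect.bisect_left(cdf, random.random()), 255)

print([sample_8bit() for _ in range(10)])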
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2017-02-01
The scientific methodology based on two descriptive levels, ontic (reality as it is) and epistemic (observational), is briefly presented. Following Schrödinger, we point to the possible gap between these two descriptions. Our main aim is to show that, although ontic entities may be inaccessible to observations, they can be useful for clarification of the physical nature of operational epistemic entities. We illustrate this thesis by a concrete example: starting with the concrete ontic model preceding quantum mechanics (the latter is treated as an epistemic model), namely, prequantum classical statistical field theory (PCSFT), we propose the natural physical interpretation for the basic quantum mechanical entity: the quantum state ("wave function"). The correspondence PCSFT ↦ QM is not straightforward; it couples the covariance operators of classical (prequantum) random fields with the quantum density operators. We use this correspondence to clarify the physical meaning of the pure quantum state and the superposition principle by using the formalism of classical field correlations. In classical mechanics the phase space description can be considered as the ontic description; here states are given by points λ = (x, p) of phase space. The dynamics of the ontic state is given by the system of Hamiltonian equations. We can also consider probability distributions on the phase space (or equivalently random variables valued in it). We call them probabilistic ontic states. Dynamics of probabilistic ontic states is given by the Liouville equation. In classical physics we can (at least in principle) measure both the coordinate and momentum and hence ontic states can be treated as epistemic states as well (or it is better to say that here epistemic states can be treated as ontic states). Probabilistic ontic states represent probabilities for outcomes of joint measurement of position and momentum. However, this was a very special, although very important, example of description of physical phenomena. In general there are no reasons to expect that properties of ontic states are approachable through our measurements. There is a gap between ontic and epistemic descriptions, cf. also 't Hooft [49,50] and G. Groessing et al. [51]. In general the presence of such a gap also implies unapproachability of the probabilistic ontic states, i.e., probability distributions on the space of ontic states. De Broglie [28] called such probability distributions hidden probabilities and distinguished them sharply from probability distributions of measurement outcomes, see also Lochak [29]. (The latter distributions are described by the quantum formalism.) This ontic-epistemic approach based on the combination of two descriptive levels for natural phenomena is closely related to the old Bild conception which originated in the works of Hertz. Later it was heavily explored by Schrödinger in the quantum domain, see, e.g., [8,11] for detailed analysis. According to Hertz one cannot expect to construct a complete theoretical model based explicitly on observable quantities. The complete theoretical model can contain quantities which are unapproachable for external measurement inspection. For example, Hertz, by trying to create a mechanical model for Maxwell's electromagnetism, invented hidden masses. The main distinguishing property of a theoretical model (in contrast to an observational model) is the continuity of description, i.e., the absence of gaps in description.
From this viewpoint, the quantum mechanical description is not continuous: there is a gap between premeasurement dynamics and the measurement outcome. QM cannot say anything about what happens in the process of measurement; this is the well known measurement problem of QM [32], cf. [52,53]. Continuity of description is closely related to causality. However, here we cannot go into more detail, see [8,11]. The important question is about the interrelation between the two levels of description, ontic-epistemic (or theoretical-observational). In the introduction we have already cited Schrödinger, who emphasized the possible complexity of this interrelation. In particular, in general there is no reason to expect a straightforward coupling of the form, cf. [9,10]:
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
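In the same spirit, rectangular bivariate normal probabilities can be computed today with scipy (a re-creation of the idea, not the original programs; the parameters are arbitrary):

from scipy.stats import multivariate_normal

F = multivariate_normal(mean=[0.0, 0.0],
                        cov=[[1.0, 0.6],
                             [0.6, 1.0]]).cdf

# P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the joint CDF
a1, b1, a2, b2 = -1.0, 1.0, -0.5, 2.0
p_rect = F([b1, b2]) - F([a1, b2]) - F([b1, a2]) + F([a1, a2])
print(f"P(rectangle) = {p_rect:.4f}")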
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
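A small worked example of the Poisson aggregation step mentioned above (the rates are invented placeholders, not estimates from the study):

import math

# If each source follows a Poisson process with rate lambda_i (events/yr),
# the probability of at least one event in T years is 1 - exp(-T * sum(lambda_i)).
rates_per_year = {"plate-boundary EQ source A": 1 / 500,
                  "plate-boundary EQ source B": 1 / 900,
                  "landslide source C":         1 / 2000}
T = 50.0
total_rate = sum(rates_per_year.values())
p_any = 1.0 - math.exp(-total_rate * T)
print(f"P(at least one tsunamigenic event in {T:.0f} yr) = {p_any:.3f}")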
Statistical properties of two sine waves in Gaussian noise.
NASA Technical Reports Server (NTRS)
Esposito, R.; Wilson, L. R.
1973-01-01
A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seem to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the relative cumulative distribution function (cdf).
Effect of thermal noise on vesicles and capsules in shear flow.
Abreu, David; Seifert, Udo
2012-07-01
We add thermal noise consistently to reduced models of undeformable vesicles and capsules in shear flow and derive analytically the corresponding stochastic equations of motion. We calculate the steady-state probability distribution function and construct the corresponding phase diagrams for the different dynamical regimes. For fluid vesicles, we predict that at small shear rates thermal fluctuations induce a tumbling motion for any viscosity contrast. For elastic capsules, due to thermal mixing, an intermittent regime appears in regions where deterministic models predict only pure tank treading or tumbling.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
Stochastic summation of empirical Green's functions
Wennerberg, Leif
1990-01-01
Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events reasonably approximated by a power law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2016-01-01
Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size from materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed which selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map composed of J pixels is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess the quality of the maps. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected. The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using custom-made diffraction apparatus.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
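A quick numerical illustration of this scale invariance (a sketch; the log-uniform sampling range and the rescaling factor π are arbitrary choices): samples from f(x) ∝ 1/x keep Benford-like first-digit frequencies after an arbitrary change of scale.

import numpy as np

rng = np.random.default_rng(3)
# Inverse-transform sampling of f(x) = 1/(x ln(b/a)) on [a, b]:
a, b = 1.0, 10_000.0
x = a * (b / a) ** rng.random(200_000)

def first_digit_freq(v):
    d = (v / 10.0 ** np.floor(np.log10(v))).astype(int)  # leading digit 1..9
    return np.bincount(d, minlength=10)[1:] / len(v)

print("original :", np.round(first_digit_freq(x), 3))
print("rescaled :", np.round(first_digit_freq(np.pi * x), 3))
print("Benford  :", np.round(np.log10(1 + 1 / np.arange(1, 10)), 3))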
Comparison of heavy-ion transport simulations: Collision integral in a box
NASA Astrophysics Data System (ADS)
Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen
2018-03-01
Simulations by transport codes are indispensable to extract valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundary, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we only consider the two-body collision term; i.e., we perform cascade calculations. When the Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in the reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, the Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigation in a similar vein of other ingredients in transport calculations, like the mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
NASA Astrophysics Data System (ADS)
Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.
2016-12-01
Continuous over-exploitation of groundwater causes dramatic drawdown, and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the up-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of strata's lithofacies of the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the thickness of the clay layer extracted from the simulation with the deformation field acquired from PS-InSAR technology, the influence of strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories and their probability distribution in the observed space was mined by using the transition probability geostatistics, of which clay was the predominant compressible material. Geologically plausible realizations of lithofacies distribution were produced, accounting for complex heterogeneity in the alluvial plain. At a particular probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of strata's lithofacies. This level, equaling nearly the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Some similarities of pattern were indicated between the spatial distribution of the deformation field and the clay layer. In areas with roughly similar water table decline, subsidence occurs more at subsurface locations having a higher probability of compressible material than at locations with a lower probability. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
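For small problems, the exact distribution can be checked by brute-force enumeration (a verification sketch under the null of independent uniform ranks, not the paper's number-theoretic derivation):

from collections import Counter
from itertools import product

n, k = 20, 3                        # n genes, k replicates (kept small on purpose)
counts = Counter()
for ranks in product(range(1, n + 1), repeat=k):
    rp = 1
    for r in ranks:
        rp *= r                     # rank product of one gene
    counts[rp] += 1

total = n ** k
def p_value(observed_rp):
    # exact tail probability P(RP <= observed_rp) under the null
    return sum(c for v, c in counts.items() if v <= observed_rp) / total

print(p_value(8))                   # e.g. rank tuples like (1, 2, 4) or (2, 2, 2)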
NASA Technical Reports Server (NTRS)
Gosling, J. T.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Zwickl, R. D.; Paschmann, G.; Sckopke, N.; Hynds, R. J.
1981-01-01
An ion velocity distribution function of the postshock phase of an energetic storm particle (ESP) event is obtained from data from the ISEE 2 and ISEE 3 experiments. The distribution function is roughly isotropic in the solar wind frame from solar wind thermal energies to 1.6 MeV. The ESP event studied (8/27/78) is superposed upon a more energetic particle event which was predominantly field-aligned and which was probably of solar origin. The observations suggest that the ESP population is accelerated directly out of the solar wind thermal population or its quiescent suprathermal tail by a stochastic process associated with the shock wave disturbance. The acceleration mechanism is sufficiently efficient that approximately 1% of the solar wind population is accelerated to suprathermal energies. These suprathermal particles have an energy density of approximately 290 eV per cubic centimeter.
Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm
NASA Astrophysics Data System (ADS)
Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun
2017-02-01
We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is a model-free reinforcement learning strategy in essence, is used for performing a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) enables fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one.
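For reference, a sketch of the success-probability bookkeeping for the standard Grover iteration (rotation phase fixed at π); the paper's contribution, learning a better phase policy α = π(λ), is not reproduced here:

import math

def grover_success(lam, k):
    # success probability after k standard Grover iterations,
    # with lam the fraction of marked items
    theta = math.asin(math.sqrt(lam))
    return math.sin((2 * k + 1) * theta) ** 2

lam = 1 / 64
for k in range(8):
    print(k, round(grover_success(lam, k), 4))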
NASA Astrophysics Data System (ADS)
Sims, Elizabeth M.
In order to study the impact of climate change on the Earth's hydrologic cycle, global information about snowfall is needed. To achieve global measurements of snowfall over both land and ocean, satellites are necessary. While satellites provide the best option for making measurements on a global scale, the task of estimating snowfall rate from these measurements is a complex problem. Satellite-based radar, for example, measures effective radar reflectivity, Ze, which can be converted to snowfall rate, S, via a Ze-S relation. Choosing the appropriate Ze-S relation to apply is a complicated problem, however, because quantities such as particle shape, size distribution, and terminal velocity are often unknown, and these quantities directly affect the Ze-S relation. Additionally, it is important to correctly classify the phase of precipitation. A misclassification can result in order-of-magnitude errors in the estimated precipitation rate. Using global ground-based observations over multiple years, the influence of different geophysical parameters on precipitation phase is investigated, with the goal of obtaining an improved method for determining precipitation phase. The parameters studied are near-surface air temperature, atmospheric moisture, low-level vertical temperature lapse rate, surface skin temperature, surface pressure, and land cover type. To combine the effects of temperature and moisture, wet-bulb temperature, instead of air temperature, is used as a key parameter for separating solid and liquid precipitation. Results show that in addition to wet-bulb temperature, vertical temperature lapse rate also affects the precipitation phase. For example, at a near-surface wet-bulb temperature of 0°C, a lapse rate of 6°C km-1 results in an 86 percent conditional probability of solid precipitation, while a lapse rate of -2°C km-1 results in a 45 percent probability. For near-surface wet-bulb temperatures less than 0°C, skin temperature affects precipitation phase, although the effect appears to be minor. Results also show that surface pressure appears to influence precipitation phase in some cases, however, this dependence is not clear on a global scale. Land cover type does not appear to affect precipitation phase. Based on these findings, a parameterization scheme has been developed that accepts available meteorological data as input, and returns the conditional probability of solid precipitation. Ze-S relations for various particle shapes, size distributions, and terminal velocities have been developed as part of this research. These Ze-S relations have been applied to radar reflectivity data from the CloudSat Cloud Profiling Radar to calculate the annual mean snowfall rate. The calculated snowfall rates are then compared to surface observations of snowfall. An effort to determine which particle shape best represents the type of snow falling in various locations across the United States has been made. An optimized Ze-S relation has been developed, which combines multiple Ze-S relations in order to minimize error when compared to the surface snowfall observations. Additionally, the resulting surface snowfall rate is compared with the CloudSat standard product for snowfall rate.
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the exceedance probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge is small for any probability. For a set of parameters, the derived probability distribution of peak discharge is well fitted by the gamma distribution. Finally, two further analyses were carried out: an application to a small watershed, with the aim of testing whether rational runoff coefficient tables for the rational method can be arranged in advance, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil.
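For readers who want to reproduce the infiltration component, here is a minimal sketch of the Green-Ampt cumulative-infiltration solution with illustrative soil parameters; the coupling to the kinematic wave and IDF models that defines the full GABS chain is not shown.

```python
import math

def green_ampt_F(t_hr, K=1.0, psi=10.0, d_theta=0.3, tol=1e-9):
    """Cumulative infiltration F (cm) at time t (h) from the implicit
    Green-Ampt relation F - psi*d_theta*ln(1 + F/(psi*d_theta)) = K*t,
    solved by fixed-point iteration.  K (cm/h): saturated conductivity;
    psi (cm): wetting-front suction head; d_theta: moisture deficit."""
    s = psi * d_theta
    F = K * t_hr
    while True:
        F_new = K * t_hr + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Infiltration rate f = K * (1 + psi*d_theta/F) then feeds the runoff model.
for t in (0.5, 1.0, 2.0):
    print(t, round(green_ampt_F(t), 3))
```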
Cellular Analysis of Boltzmann Most Probable Ideal Gas Statistics
NASA Astrophysics Data System (ADS)
Cahill, Michael E.
2018-04-01
Exact treatment of Boltzmann's most probable statistics for an ideal gas of identical-mass particles having translational kinetic energy gives a distribution law for velocity phase-space cell j which relates the particle energy and the particle population according to $B\,e(j) = A - \Psi(n(j) + 1)$, where A and B are the Lagrange multipliers and Ψ is the digamma function defined by $\Psi(x+1) = \frac{d}{dx}\ln(x!)$. A useful, sufficiently accurate approximation for Ψ is $\Psi(x+1) \approx \ln(e^{-\gamma} + x)$, where γ is the Euler constant (≈ 0.5772156649), so the distribution equation is approximately $B\,e(j) = A - \ln(e^{-\gamma} + n(j))$, which can be inverted to solve for n(j), giving $n(j) = (e^{B(e_H - e(j))} - 1)\,e^{-\gamma}$, where $B\,e_H = A + \gamma$ and $B\,e_H$ is a unitless particle energy which replaces the parameter A. The two approximate distribution equations imply that $e_H$ is the highest particle energy and the highest particle population is $n_H = (e^{B e_H} - 1)\,e^{-\gamma}$, which follows from the facts that the population becomes negative if $e(j) > e_H$ and the kinetic energy becomes negative if $n(j) > n_H$. An explicit construction of cells in velocity space which are equal in volume and homogeneous for almost all cells is shown to be useful in the analysis. Plots of sample distribution properties using e(j) as the independent variable are presented.
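A quick numerical check of the digamma approximation and of the inverted population law (symbols as above; the values of B and e_H are arbitrary, for illustration only):

```python
import numpy as np
from scipy.special import digamma

g = np.euler_gamma
x = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 5.0, 20.0])
exact = digamma(x + 1.0)
approx = np.log(np.exp(-g) + x)
print(np.max(np.abs(exact - approx)))  # max error ~0.02; exact at x = 0

def n_of_e(e, B, eH):
    """Approximate cell population n(j) from the inverted distribution law."""
    return (np.exp(B * (eH - e)) - 1.0) * np.exp(-g)

print(n_of_e(np.array([0.0, 0.5, 1.0]), B=2.0, eH=1.0))  # n_H at e=0, 0 at e=eH
```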
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
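As a sketch of one of the compared approaches, the following computes stabilized weights from a homoscedastic normal exposure model, with simulated data standing in for the empirical cohort:

```python
import numpy as np
from scipy.stats import norm

def stabilized_weights(A, X):
    """Stabilized IPW for a continuous exposure A given covariates X,
    assuming A | X ~ Normal(X*beta, sd^2) with constant variance.
    The quantile-binning alternative would replace these densities
    with coarse bin frequencies."""
    X1 = np.column_stack([np.ones(len(A)), X])
    beta, *_ = np.linalg.lstsq(X1, A, rcond=None)
    resid = A - X1 @ beta
    sd = resid.std(ddof=X1.shape[1])
    num = norm.pdf(A, loc=A.mean(), scale=A.std(ddof=1))  # marginal density
    den = norm.pdf(A, loc=X1 @ beta, scale=sd)            # conditional density
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
A = X @ np.array([0.5, -0.3]) + rng.normal(size=500)
w = stabilized_weights(A, X)
print(round(w.mean(), 3), round(w.min(), 3), round(w.max(), 3))  # mean near 1
```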
Stochastic description of geometric phase for polarized waves in random media
NASA Astrophysics Data System (ADS)
Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent
2013-01-01
We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.
Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)
NASA Astrophysics Data System (ADS)
Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.
2013-12-01
We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm, respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion using a novel scheme to predict probability distributions for molecular displacements using the LB method and a streamline algorithm, modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian as time increases. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part. We observe that the fraction of trapped solute increases from the bead pack (0%) to Bentheimer sandstone (1.5%) to Portland carbonate (8.1%), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. [Figure: pore space images (left) and velocity distributions (right); Yang and Boek, 2013.]
Convergence Time and Phase Transition in a Non-monotonic Family of Probabilistic Cellular Automata
NASA Astrophysics Data System (ADS)
Ramos, A. D.; Leite, A.
2017-08-01
In dynamical systems, some of the most important questions are related to phase transitions and convergence time. We consider a one-dimensional probabilistic cellular automaton whose components assume two possible states, zero and one, and interact with their two nearest neighbors at each time step. Under the local interaction, if a component is in the same state as its two neighbors, it does not change its state. In the other cases, a component in state zero turns into a one with probability α, and a component in state one turns into a zero with probability 1-β. For certain values of α and β, we show that the process always converges weakly to δ₀, the measure concentrated on the configuration where all the components are zeros. Moreover, the mean time of this convergence is finite, and we describe an upper bound for it, which is a linear function of the initial distribution. We also demonstrate an application of our results to the percolation PCA. Finally, we use mean-field approximation and Monte Carlo simulations to show the coexistence of three distinct behaviours for some values of the parameters α and β.
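The local rule is simple enough to simulate directly; a minimal sketch with periodic boundaries and illustrative values of α and β:

```python
import numpy as np

def pca_step(c, alpha, beta, rng):
    """One synchronous update: a site equal to both neighbors is frozen;
    otherwise 0 -> 1 with probability alpha, 1 -> 0 with probability 1-beta."""
    left, right = np.roll(c, 1), np.roll(c, -1)
    frozen = (c == left) & (c == right)
    u = rng.random(c.size)
    from_zero = (u < alpha).astype(np.int8)        # new state if currently 0
    from_one = (u >= 1.0 - beta).astype(np.int8)   # stays 1 with probability beta
    updated = np.where(c == 0, from_zero, from_one)
    return np.where(frozen, c, updated)

rng = np.random.default_rng(1)
c = rng.integers(0, 2, size=1000).astype(np.int8)
for _ in range(2000):
    c = pca_step(c, alpha=0.05, beta=0.3, rng=rng)
print(c.mean())  # density of ones; near 0 when converging toward delta_0
```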
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
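The two distributional building blocks are easy to evaluate; the sketch below uses illustrative parameter values rather than the report's fitted Cape Kennedy values, and does not attempt the report's specific combination of the two components:

```python
import numpy as np
from scipy.stats import nbinom, poisson

def zero_truncated_poisson_pmf(k, lam):
    """P(N = k | N >= 1) for N ~ Poisson(lam)."""
    return poisson.pmf(np.asarray(k), lam) / (1.0 - np.exp(-lam))

lam, r, q = 1.8, 2.0, 0.6  # illustrative parameters
k = np.arange(1, 8)
print(zero_truncated_poisson_pmf(k, lam))  # storms, given an event occurs
print(nbinom.pmf(np.arange(0, 8), r, q))   # negative binomial component
```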
NASA Astrophysics Data System (ADS)
Chakrabarti, R.; Yogesh, V.
2016-04-01
We study the evolution of hybrid entangled states in a bipartite (ultra)strongly coupled qubit-oscillator system. Using the generalized rotating wave approximation, the reduced density matrices of the qubit and the oscillator are obtained. The reduced density matrix of the oscillator yields the phase-space quasiprobability distributions such as the diagonal P-representation, the Wigner W-distribution and the Husimi Q-function. In the strong coupling regime the Q-function evolves to uniformly separated macroscopically distinct Gaussian peaks representing ‘kitten’ states at certain specified times that depend on multiple time scales present in the interacting system. The ultrastrong coupling strength of the interaction triggers the appearance of a large number of modes that quickly develop a randomization of their phase relationships. A stochastic averaging of the dynamical quantities sets in, and leads to the decoherence of the system. The delocalization in the phase space of the oscillator is studied by using the Wehrl entropy. The negativity of the W-distribution reflects the departure of the oscillator from the classical states, and allows us to study the underlying differences between various information-theoretic measures such as the Wehrl entropy and the Wigner entropy. Other features of nonclassicality, such as the existence of squeezed states and the appearance of negative values of the Mandel parameter, are realized during the course of evolution of the bipartite system. In the parametric regime studied here these properties do not survive in the time-averaged limit.
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
Foraging patterns in online searches.
Wang, Xiangwen; Pleimling, Michel
2017-03-01
Nowadays online searches are undeniably the most common form of information gathering, as witnessed by billions of clicks generated each day on search engines. In this work we describe online searches as foraging processes that take place on the semi-infinite line. Using a variety of quantities like probability distributions and complementary cumulative distribution functions of step length and waiting time as well as mean square displacements and entropies, we analyze three different click-through logs that contain the detailed information of millions of queries submitted to search engines. Notable differences between the different logs reveal an increased efficiency of the search engines. In the language of foraging, the newer logs indicate that online searches overwhelmingly yield local searches (i.e., on one page of links provided by the search engines), whereas for the older logs the foraging processes are a combination of local searches and relocation phases that are power law distributed. Our investigation of click logs of search engines therefore highlights the presence of intermittent search processes (where phases of local explorations are separated by power law distributed relocation jumps) in online searches. It follows that good search engines enable the users to find the information they are looking for through a local exploration of a single page with search results, whereas for poor search engine users are often forced to do a broader exploration of different pages.
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulted mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high level recloser allocation scheme and a low level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all-digital phase locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements, their functions, and the phase lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error states in modulo two-pi form a finite-state Markov chain, which enables the calculation of steady state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
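The steady-state side of such an analysis is standard Markov-chain algebra; a sketch with a placeholder transition matrix (not the loop's actual quantizer-derived probabilities):

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi with pi P = pi, via the left eigenvector
    of P associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Toy 4-state phase-error chain in modulo-2*pi form (placeholder values):
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.3, 0.6, 0.1],
              [0.2, 0.0, 0.2, 0.6]])
pi = stationary(P)
phase_err = np.array([0.0, 0.5, 1.0, 1.5])       # assumed error per state (rad)
print(pi, np.sqrt(np.sum(pi * phase_err ** 2)))  # steady state and RMS error
```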
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, the day of the week, or the time of day. For deterministic radial distribution load flow studies, the load is taken as constant. But load varies continually and with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: (i) finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; (ii) finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and (iii) comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
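A minimal sketch of the Monte Carlo procedure, with a crude two-bus voltage-drop expression standing in for the deterministic radial load-flow solver:

```python
import numpy as np

def radial_load_flow(p_pu, q_pu, r=0.05, x=0.04):
    """Stand-in deterministic 'solver': two-bus voltage-drop approximation
    V ~ 1 - (r*P + x*Q) in per unit.  A real study would run a
    backward/forward sweep over the full feeder."""
    return 1.0 - (r * p_pu + x * q_pu)

def probabilistic_load_flow(p_mean=0.8, p_sd=0.1, q_mean=0.4, q_sd=0.05,
                            n_sim=5000, seed=0):
    """Sample normal active/reactive loads, solve each deterministic case,
    and reconstruct the empirical voltage distribution."""
    rng = np.random.default_rng(seed)
    p = rng.normal(p_mean, p_sd, n_sim)
    q = rng.normal(q_mean, q_sd, n_sim)
    v = radial_load_flow(p, q)
    return v.mean(), v.std(), np.percentile(v, [5, 95])

print(probabilistic_load_flow())
```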
NASA Astrophysics Data System (ADS)
Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.
2017-10-01
The probability distribution of the turbulent kinetic energy dissipation rate in the stratified ocean usually deviates from the classic lognormal distribution that has been formulated for, and often observed in, unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, the northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate $\tilde{\varepsilon}_r$ in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (Type XII) probability distribution. In weakly stratified boundary layers, a lognormal distribution of $\tilde{\varepsilon}_r$ is preferable, although the Burr is an acceptable alternative. The skewness $Sk_\varepsilon$ and the kurtosis $K_\varepsilon$ of the dissipation rate appear to be well correlated over a wide range of $Sk_\varepsilon$ and $K_\varepsilon$ variability.
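The model comparison can be scripted directly with scipy, whose burr12 implements the Burr Type XII family; synthetic data stand in for the microstructure-derived dissipation rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
eps = stats.lognorm.rvs(s=1.2, scale=1e-8, size=2000, random_state=rng)

for dist in (stats.burr12, stats.lognorm):
    params = dist.fit(eps, floc=0.0)   # fix the location parameter at zero
    ll = np.sum(dist.logpdf(eps, *params))
    print(dist.name, round(ll, 1))     # higher log-likelihood wins; an AIC/BIC
                                       # comparison would penalize the extra
                                       # Burr shape parameter
```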
NASA Astrophysics Data System (ADS)
Fernandes, K.; Baethgen, W.; Verchot, L. V.; Giannini, A.; Pinedo-Vasquez, M.
2014-12-01
A complete assessment of climate change projections requires understanding the combined effects of decadal variability and long-term trends and evaluating the ability of models to simulate them. The western Amazon (WA) severe droughts of the 2000s were the result of a modest drying trend enhanced by reduced moisture transport from the tropical Atlantic. Most of the WA dry-season precipitation decadal variability is attributable to decadal fluctuations of the north-south gradient (NSG) in Atlantic sea surface temperature (SST). The observed WA and NSG decadal co-variability is well reproduced in Global Climate Model (GCM) pre-industrial control (PIC) and historical (HIST) experiments that were part of the Intergovernmental Panel on Climate Change fifth assessment report (IPCC-AR5). This suggests that unforced or natural climate variability, characteristic of the PIC simulations, determines the nature of this coupling, as the results from HIST simulations (forced with greenhouse gases (GHG) and natural and anthropogenic aerosols) are comparable in magnitude and spatial distribution. Decadal fluctuation in the NSG also determines shifts in the probability of repeated droughts and pluvials in WA: there is a 65% chance of 3 or more years of drought per decade when NSG>0, compared to 18% when NSG<0. The HIST and PIC model simulations also reproduce the observed shifts in the probability distribution of droughts and pluvials as a function of the NSG decadal phase, suggesting there is great potential for decadal predictability based on GCMs. Persistence of the current NSG positive phase may lead to continuing above-normal frequencies of western Amazon dry-season droughts.
Impact of contrarians and intransigents in a kinetic model of opinion dynamics
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Blanco, Victor H.; Anteneodo, Celia
2014-01-01
In this work we study opinion formation in a fully connected population participating in a public debate with two distinct choices, where the agents may adopt three different attitudes (favorable to one choice, favorable to the other, or undecided). The interactions between agents occur in pairs and are competitive, with couplings that are negative with probability p or positive with probability 1-p. This bimodal probability distribution of couplings produces a behavior similar to the one resulting from the introduction of Galam's contrarians into the population. In addition, we consider that a fraction d of the individuals are intransigent, that is, reluctant to change their opinions. The consequences of the presence of contrarians and intransigents are studied by means of computer simulations. Our results suggest that the presence of inflexible agents affects the critical behavior of the system, causing either a shift of the critical point or the suppression of the ordering phase transition, depending on the groups of opinions to which the intransigents belong. We also discuss the relevance of the model for real social systems.
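The sketch below is one plausible reading of these rules, kinetic pairwise exchange with couplings of ±1, opinions in {-1, 0, +1}, and a quenched fraction d of intransigents; the exact update used in the paper may differ.

```python
import numpy as np

def order_parameter(N=2000, p=0.2, d=0.1, steps=200_000, seed=3):
    """Pairwise kinetic opinion exchange: o_i <- clip(o_i + mu*o_j),
    mu = -1 with probability p (contrarian-like coupling), else +1;
    intransigents never update."""
    rng = np.random.default_rng(seed)
    o = rng.choice([-1, 0, 1], size=N)
    stubborn = rng.random(N) < d
    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        if stubborn[i]:
            continue
        mu = -1 if rng.random() < p else 1
        o[i] = int(np.clip(o[i] + mu * o[j], -1, 1))
    return abs(o.mean())

print(order_parameter(p=0.1), order_parameter(p=0.4))  # ordered vs disordered
```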
Distributed micro-radar system for detection and tracking of low-profile, low-altitude targets
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2016-05-01
The proposed airborne surveillance radar system can detect, locate, track, and classify low-profile, low-altitude targets: from traditional fixed and rotary wing aircraft to non-traditional targets like unmanned aircraft systems (drones) and even small projectiles. The distributed micro-radar system is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. To extend the high frequency limit and provide high sensitivity over a broad band of frequencies, multiple angularly spaced directional antennas are coupled with front-end circuits and separately connected to a direction finder processor by a digital interface. Integration of the antennas with front-end circuits makes it possible to eliminate waveguide lines, which limit system bandwidth and create frequency-dependent phase errors. Digitizing the received signals close to the antennas allows loose distribution of the antennas and dramatically decreases the phase errors associated with waveguides. The accuracy of direction finding in the proposed micro-radar is then determined by the timing accuracy of the digital processor and the sampling frequency. Multi-band, multi-functional antennas can be distributed around the perimeter of an unmanned aircraft system (UAS) and connected to the processor by a digital interface, or can be distributed among a swarm/formation of mini/micro UAS and connected wirelessly. Expendable micro-radars can be distributed around the perimeter of a defended object to create a multi-static radar network. Low-profile, low-altitude, high speed targets, like small projectiles, create a Doppler shift in a narrow frequency band. This signal can be effectively filtered and detected with high probability. The proposed micro-radar can work in passive, monostatic or bistatic regimes.
A model of jam formation in congested traffic
NASA Astrophysics Data System (ADS)
Bunzarova, N. Zh; Pesheva, N. C.; Priezzhev, V. B.; Brankov, J. G.
2017-12-01
We study a model of irreversible jam formation in congested vehicular traffic on an open segment of a single-lane road. The vehicles obey a stochastic discrete-time dynamics which is a limiting case of the generalized Totally Asymmetric Simple Exclusion Process. Its characteristic features are: (a) existing clusters of jammed cars cannot break into parts; (b) when the leading vehicle of a cluster hops to the right, the whole cluster follows it deterministically; and (c) any two clusters of vehicles occupying consecutive positions on the chain may become nearest neighbors and merge irreversibly into a single cluster. The above dynamics was used in a one-dimensional model of irreversible aggregation by Bunzarova and Pesheva [Phys. Rev. E 95, 052105 (2017)]. The model has three stationary non-equilibrium phases, depending on the probabilities of injection (α), ejection (β), and hopping (p) of particles: a many-particle phase MP, a completely jammed phase CF, and a mixed MP+CF phase. An exact expression for the stationary probability P(1) of a completely jammed configuration in the mixed MP+CF phase is obtained. The gap distribution between neighboring clusters of jammed cars at large lengths L of the road is studied. Three regimes of evolution of the width of a single gap are found: (i) growing gaps, with length of order O(L), when β > p; (ii) shrinking gaps, with length of order O(1), when β < p; and (iii) critical gaps at β = p, of order O(L^{1/2}). These results are supported by extensive Monte Carlo calculations.
2016-01-01
The Central Balkans region is of great importance for understanding the spread of the Neolithic in Europe but the Early Neolithic population dynamics of the region is unknown. In this study we apply the method of summed calibrated probability distributions to a set of published radiocarbon dates from the Republic of Serbia in order to reconstruct population dynamics in the Early Neolithic in this part of the Central Balkans. The results indicate that there was a significant population growth after ~6200 calBC, when the Neolithic was introduced into the region, followed by a bust at the end of the Early Neolithic phase (~5400 calBC). These results are broadly consistent with the predictions of the Neolithic Demographic Transition theory and the patterns of population booms and busts detected in other regions of Europe. These results suggest that the cultural process that underlies the patterns observed in Central and Western Europe was also in operation in the Central Balkan Neolithic and that the population increase component of this process can be considered as an important factor for the spread of the Neolithic as envisioned in the demic diffusion hypothesis. PMID:27508413
Space-time thermodynamics of the glass transition
NASA Astrophysics Data System (ADS)
Merolle, Mauro; Garrahan, Juan P.; Chandler, David
2005-08-01
We consider the probability distribution for fluctuations in dynamical action and similar quantities related to dynamic heterogeneity. We argue that the so-called “glass transition” is a manifestation of low action tails in these distributions where the entropy of trajectory space is subextensive in time. These low action tails are a consequence of dynamic heterogeneity and an indication of phase coexistence in trajectory space. The glass transition, where the system falls out of equilibrium, is then an order-disorder phenomenon in space-time occurring at a temperature Tg, which is a weak function of measurement time. We illustrate our perspective ideas with facilitated lattice models and note how these ideas apply more generally.
1978-03-01
Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. Thesis, AFIT/GAE. [Only a fragment of the abstract survives: an equation is derived for the risk of rupture of a unidirectionally laminated composite subjected to pure bending, and it is noted that this equation can be simplified further.]
NASA Astrophysics Data System (ADS)
Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin; Gong, Yanjun
2017-11-01
In this paper, a diffusion limited aggregation (DLA) algorithm is improved to generate alumina particle clusters with different radii of monomers in the plume. The scattering properties of these alumina clusters are solved by the multiple sphere T-matrix method (MSTM). The effect of the number and radius of the monomers on the scattering properties of clusters of alumina particles is discussed. The scattering properties of two types of alumina particle clusters are compared: one has monomer radii that follow a lognormal probability distribution, the other has monomers of identical radius equal to the mean of the lognormal probability distribution. The results show that the scattering phase functions and linear polarization degrees of these two types of alumina particle clusters differ greatly. For the alumina clusters with different radii of monomers, the forward scattering is stronger and the linear polarization degree has multiple peaks. Moreover, the variation of their scattering properties does not correlate strongly with the number of monomers. For larger booster motors, 25-38% of the plume is condensed alumina. The alumina can scatter radiation from other sources present in the plume and affects the radiative transfer characteristics of the plume. In addition, the shape, size distribution and refractive index of the particles in the plume can be estimated from the linear polarization degree. Therefore, accurate calculation of the scattering properties is very important to reduce the deviation in the related research.
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction θ and the admixture parameter α, and we compared this with the P values when one maximizes only with respect to θ (i.e., the standard LOD score). We generated datasets of phase-known and phase-unknown nuclear families, of sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over θ and α; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution $\xi = \tfrac{1}{2}\chi^2_1 + \tfrac{1}{2}\chi^2_2$. Thus, maximizing the HLOD over θ and α appears to add considerably less than an additional degree of freedom to the associated $\chi^2_1$ distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
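The quoted mixture bound converts a maximized HLOD into a P value with two lines of scipy, using the standard conversion of a LOD score to a likelihood-ratio statistic:

```python
import math
from scipy.stats import chi2

def hlod_p_value(hlod):
    """Upper-bound P value from the mixture (1/2)chi2_1 + (1/2)chi2_2;
    2*ln(10)*HLOD is the likelihood-ratio statistic."""
    stat = 2.0 * math.log(10.0) * hlod
    return 0.5 * chi2.sf(stat, df=1) + 0.5 * chi2.sf(stat, df=2)

print(hlod_p_value(3.0))
```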
Universality of the Volume Bound in Slow-Roll Eternal Inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubovsky, Sergei; Senatore, Leonardo; Villadoro, Giovanni
2012-03-28
It has recently been shown that in single-field slow-roll inflation the total volume cannot grow by a factor larger than $e^{S_{dS}/2}$ without becoming infinite. The bound is saturated exactly at the phase transition to eternal inflation, where the probability to produce infinite volume becomes nonzero. We show that the bound holds sharply also in any number of space-time dimensions, when arbitrary higher-dimensional operators are included, and in the multi-field inflationary case. The relation with the entropy of de Sitter and the universality of the bound strengthen the case for a deeper holographic interpretation. As a spin-off we provide the formalism to compute the probability distribution of the volume after inflation for generic multi-field models, which might help to address questions about the population of vacua of the landscape during slow-roll inflation.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
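One standard realization of the 'marginals plus dependence function' construction is the Gumbel-Hougaard copula; the sketch below pairs it with two-parameter Weibull marginals and is purely illustrative, not the report's exact parameterization:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1;
    theta = 1 gives independence, larger theta stronger dependence."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def bivariate_weibull_cdf(x, y, kx, lx, ky, ly, theta):
    """Joint CDF from Weibull(k, lambda) marginals and the copula above."""
    Fx = 1.0 - np.exp(-((x / lx) ** kx))
    Fy = 1.0 - np.exp(-((y / ly) ** ky))
    return gumbel_copula(Fx, Fy, theta)

print(bivariate_weibull_cdf(2.0, 3.0, kx=1.5, lx=2.0, ky=2.0, ly=2.5, theta=1.7))
```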
Effects of ultrashort laser pulses on angular distributions of photoionization spectra.
Ooi, C H Raymond; Ho, W L; Bandrauk, A D
2017-07-27
We study the photoelectron spectra generated by intense laser pulses with arbitrary time dependence and phase within the Keldysh framework. An efficient semianalytical approach using analytical transition matrix elements for hydrogenic atoms in any initial state enables efficient and accurate computation of the photoionization probability at any observation point without the saddle point approximation, providing comprehensive three-dimensional photoelectron angular distributions for linear and elliptical polarizations. These reveal intricate features and provide insights on photoionization characteristics, such as angular dispersions and the shift and splitting of photoelectron peaks, from the tunneling or above-threshold ionization (ATI) regime to the non-adiabatic (intermediate) and multiphoton ionization (MPI) regimes. This facilitates the study of the effects of various laser pulse parameters on the photoelectron spectra and their angular distributions. The photoelectron peaks occur at multiples of 2ħω for linear polarization, while odd-ordered peaks are suppressed in the direction perpendicular to the electric field. Short pulses create splitting and angular dispersion where the peaks are strongly correlated with the angles. For MPI and elliptical polarization with shorter pulses, the peaks split into doublets and the first peak vanishes. The carrier envelope phase (CEP) significantly affects the ATI spectra, while the Stark effect shifts the spectra of the intermediate regime to higher energies due to interference.
Power law scaling in synchronization of brain signals depends on cognitive load.
Tinker, Jesse; Velazquez, Jose Luis Perez
2014-01-01
As it has several features that optimize information processing, it has been proposed that criticality governs the dynamics of nervous system activity. Indications of such dynamics have been reported for a variety of in vitro and in vivo recordings, ranging from in vitro slice electrophysiology to human functional magnetic resonance imaging. However, there still remains considerable debate as to whether the brain actually operates close to criticality or in another governing state such as stochastic or oscillatory dynamics. A tool used to investigate the criticality of nervous system data is the inspection of power-law distributions. Although the findings are controversial, such power-law scaling has been found in different types of recordings. Here, we studied whether there is a power law scaling in the distribution of the phase synchronization derived from magnetoencephalographic recordings during executive function tasks performed by children with and without autism. Characterizing the brain dynamics that is different between autistic and non-autistic individuals is important in order to find differences that could either aid diagnosis or provide insights as to possible therapeutic interventions in autism. We report in this study that power law scaling in the distributions of a phase synchrony index is not very common and its frequency of occurrence is similar in the control and the autism group. In addition, power law scaling tends to diminish with increased cognitive load (difficulty or engagement in the task). There were indications of changes in the probability distribution functions for the phase synchrony that were associated with a transition from power law scaling to lack of power law (or vice versa), which suggests the presence of phenomenological bifurcations in brain dynamics associated with cognitive load. Hence, brain dynamics may fluctuate between criticality and other regimes depending upon context and behaviors.
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate an annual probability of a large eruption of 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate an annual probability of a large eruption of 1.4×10⁻⁵.
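Under the exponential (Poisson) model the quoted numbers follow directly from the event rate; a two-line sketch:

```python
import math

def eruption_probability(rate_per_yr, dt_yr=1.0):
    """P(at least one event in dt years) = 1 - exp(-rate*dt); for small
    rates this is approximately rate*dt, matching the quoted values."""
    return 1.0 - math.exp(-rate_per_yr * dt_yr)

print(eruption_probability(1.4e-4))        # Lassen Volcanic Center, 1 year
print(eruption_probability(1.4e-4, 30.0))  # ~0.42% over a 30-year horizon
```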
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
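The long-time Gaussian limit is easy to check by direct Monte Carlo; a sketch with direction flips at rate γ (all parameter values illustrative):

```python
import numpy as np

def rtp_positions(n=10000, v=1.0, gamma=1.0, D=0.0, t_max=5.0, dt=1e-3, seed=4):
    """Run-and-tumble particles on the line: velocity +/- v, flipping at
    rate gamma, plus optional translational diffusion D."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    sigma = rng.choice([-1.0, 1.0], size=n)
    for _ in range(int(t_max / dt)):
        x += v * sigma * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n)
        sigma[rng.random(n) < gamma * dt] *= -1.0
    return x

x = rtp_positions()
# Variance approaches 2*D_eff*t with D_eff = v^2/(2*gamma) + D; at t = 5 the
# exact value is ~4.5 because of the finite-time correction v^2/(2*gamma^2).
print(round(x.var(), 2))
```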
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining these cloud types are overcast footprints and cloud top pressures less than 400 hPa, but DC has higher cloud optical depths (≥10) than those of CS (<10). The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters, due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups for a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation speeds/directions.
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
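For Onemax the distribution can be written down elementarily, since ones lost and gained under bit-flip mutation are independent binomials; this is the simple special case, not the paper's general Krawtchouk-polynomial machinery:

```python
import numpy as np
from scipy.stats import binom

def onemax_fitness_pmf(n, j, p):
    """Pmf of the fitness after flipping each of n bits independently with
    probability p, starting from fitness j: new fitness = j - X + Y with
    X ~ Bin(j, p) and Y ~ Bin(n - j, p).  Each entry is a polynomial in p."""
    pmf = np.zeros(n + 1)
    for x in range(j + 1):
        for y in range(n - j + 1):
            pmf[j - x + y] += binom.pmf(x, j, p) * binom.pmf(y, n - j, p)
    return pmf

pmf = onemax_fitness_pmf(n=10, j=7, p=0.1)
print(round(pmf.sum(), 12), int(pmf.argmax()))  # sums to 1; mode near j
```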
Towards an accurate real-time locator of infrasonic sources
NASA Astrophysics Data System (ADS)
Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.
2017-11-01
Infrasonic signals propagate from an atmospheric source via media with stochastic and rapidly space-varying conditions. Hence, their travel time, their amplitude at sensor recordings, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL, aimed at higher accuracy and stability of the source location results and a reduced computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided an adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, suggested previously but not yet properly exploited, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability distributions of the phase arrival time picks. To illustrate the improvements in both computation time and location accuracy achieved, we compare location results for the new algorithms, previously published BISL-type algorithms, and the least-squares location technique. This comparison is provided via a case study of different typical spatial data distributions and a statistical experiment using a database of 36 ground-truth explosions from the Utah Test and Training Range (UTTR), recorded at USArray transportable seismic stations near the site during the US summer seasons between 2006 and 2008.
ERIC Educational Resources Information Center
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
E-O Sensor Signal Recognition Simulation: Computer Code SPOT I.
1978-10-01
[Fragmentary report text: the aerosol single-scattering phase function PDCO is defined at the specified wavelength and given for each of the defined scattering angles; currently a maximum of sixty-four angles is supported. Surviving input-format notes: WLAM(N) — wavelength at which the aerosol single-scattering phase function set is defined (microns); PDCO(N,I) — average probability for phase matrix definition; NPROB — problem number. Fig. 12 is a flowchart for the SPOT computer code.]
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty sets of EEGs recorded in neurologically normal infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. A Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of the mode showed significant linear relationships with PCA, and it was therefore considered a useful index for PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating the stationary nature of developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
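The envelope extraction and the lognormal-versus-gamma comparison can be sketched with scipy; synthetic amplitude-modulated noise stands in for the recorded EEG:

```python
import numpy as np
from scipy.signal import hilbert
from scipy import stats

fs, n = 250, 60_000                      # assumed sampling rate and length
rng = np.random.default_rng(5)
t = np.arange(n) / fs
eeg = rng.normal(size=n) * (1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t))

envelope = np.abs(hilbert(eeg))          # Hilbert-transform amplitude envelope

for dist in (stats.lognorm, stats.gamma):
    params = dist.fit(envelope, floc=0.0)
    ll = np.sum(dist.logpdf(envelope, *params))
    print(dist.name, round(ll, 1))       # compare fits, as in the study
```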
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thus forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of the floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
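A minimal sketch of the Log-Pearson Type III fit by the method of moments on log flows, without the regional-skew weighting, confidence bounds, and low-outlier handling a full Bulletin-17-style analysis would add; the sample peaks are invented for illustration:

```python
import numpy as np
from scipy.stats import pearson3, skew

def lp3_quantile(peak_flows, aep):
    """Flood quantile for annual exceedance probability `aep` from a
    Log-Pearson Type III fit (moments of log10 flows)."""
    y = np.log10(np.asarray(peak_flows, dtype=float))
    m, s, g = y.mean(), y.std(ddof=1), skew(y, bias=False)
    return 10.0 ** (m + s * pearson3.ppf(1.0 - aep, g))

flows = [1200, 980, 1500, 2300, 870, 1100, 1950, 1400, 760, 3100,
         1250, 990, 1650, 2050, 1320]          # illustrative annual peaks
print(round(lp3_quantile(flows, aep=0.01)))    # '100-year' flood estimate
```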
Wiley, Daniel A; Strogatz, Steven H; Girvan, Michelle
2006-03-01
We suggest a new line of research that we hope will appeal to the nonlinear dynamics community, especially the readers of this Focus Issue. Consider a network of identical oscillators. Suppose the synchronous state is locally stable but not globally stable; it competes with other attractors for the available phase space. How likely is the system to synchronize, starting from a random initial condition? And how does the probability of synchronization depend on the way the network is connected? On the one hand, such questions are inherently difficult because they require calculation of a global geometric quantity, the size of the "sync basin" (or, more formally, the measure of the basin of attraction for the synchronous state). On the other hand, these questions are wide open, important in many real-world settings, and approachable by numerical experiments on various combinations of dynamical systems and network topologies. To give a case study in this direction, we report results on the sync basin for a ring of n ≫ 1 identical phase oscillators with sinusoidal coupling. Each oscillator interacts equally with its k nearest neighbors on either side. For k/n greater than a critical value (approximately 0.34, obtained analytically), we show that the sync basin is the whole phase space, except for a set of measure zero. As k/n passes below this critical value, coexisting attractors are born in a well-defined sequence. These take the form of uniformly twisted waves, each characterized by an integer winding number q, the number of complete phase twists in one circuit around the ring. The maximum stable twist is proportional to n/k; the constant of proportionality is also obtained analytically. For large values of n/k, corresponding to large rings or short-range coupling, many different twisted states compete for their share of phase space. Our simulations reveal that their basin sizes obey a tantalizingly simple statistical law: the probability that the final state has q twists follows a Gaussian distribution with respect to q. Furthermore, as n/k increases, the standard deviation of this distribution grows in proportion to the square root of n/k. We have been unable to explain either of these last two results by anything beyond a hand-waving argument.
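A numerical experiment in the spirit of this study can be set up in a few lines; the integration scheme, run length, and parameter values below are illustrative choices, not the paper's:

```python
import numpy as np

def final_winding_number(n=80, k=2, t_max=200.0, dt=0.05, rng=None):
    """Integrate dtheta_i/dt = sum of sin(theta_j - theta_i) over the k
    nearest neighbours on each side, from a random initial condition,
    and return the winding number q of the final state."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(int(t_max / dt)):
        dtheta = np.zeros(n)
        for m in range(1, k + 1):
            dtheta += np.sin(np.roll(theta, m) - theta)
            dtheta += np.sin(np.roll(theta, -m) - theta)
        theta += dt * dtheta                   # forward Euler step
    # winding number: wrapped phase differences summed around the ring / 2*pi
    diffs = np.angle(np.exp(1j * (np.roll(theta, -1) - theta)))
    return int(round(diffs.sum() / (2.0 * np.pi)))

# Histogramming q over many random initial conditions approximates the
# Gaussian basin-size law reported in the abstract.
qs = [final_winding_number() for _ in range(200)]
```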
Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2014-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016
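The height/weight question quoted in this abstract reduces to a conditional-normal calculation under the bivariate normal model. A hand-checkable sketch, with made-up parameter values rather than the actual dataset's:

```python
from scipy import stats

mu_h, sd_h = 65.0, 3.5               # height (inches); assumed values
mu_w, sd_w, rho = 125.0, 15.0, 0.5   # weight (pounds) and correlation; assumed

h = mu_h                             # condition on average height
cond_mean = mu_w + rho * sd_w * (h - mu_h) / sd_h   # mean of weight | height
cond_sd = sd_w * (1.0 - rho ** 2) ** 0.5            # sd of weight | height
p = stats.norm.cdf(140, cond_mean, cond_sd) - stats.norm.cdf(120, cond_mean, cond_sd)
print(f"P(120 < weight < 140 | height = {h}) = {p:.3f}")
```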
NASA Astrophysics Data System (ADS)
Braides, Andrea; Causin, Andrea; Piatnitski, Andrey; Solci, Margherita
2018-06-01
We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability 1-p and p, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in Z^2. We prove that there exists p_0 such that for p≤ p_0 such minimizers are characterized by a majority phase; i.e., they take identically the value 1 or - 1 except for small disconnected sets. A deterministic analogue is also proved.
Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.
2016-01-01
Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerated dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with an increasing number of dose levels. PMID:27435150
Ramirez, Abelardo; Foxall, William
2014-05-28
Stochastic inversions of InSAR data were carried out to assess the probability that pressure perturbations resulting from CO2 injection into well KB-502 at In Salah penetrated into the lower caprock seal above the reservoir. Inversions of synthetic data were employed to evaluate the factors that affect the vertical resolution of overpressure distributions, and to assess the impact of various sources of uncertainty in prior constraints on inverse solutions. These include alternative pressure-driven deformation modes within reservoir and caprock, the geometry of a sub-vertical fracture zone in the caprock identified in previous studies, and imperfect estimates of the rock mechanical properties. Inversions of field data indicate that there is a high probability that a pressure perturbation during the first phase of injection extended upwards along the fracture zone ~ 150 m above the reservoir, and less than 50% probability that it reached the Hot Shale unit at 1500 m depth. Within the uncertainty bounds considered, it was concluded that it is very unlikely that the pressure perturbation approached within 150 m of the top of the lower caprock at the Hercynian Unconformity. The results are consistent with previous deterministic inversion and forward modeling studies.
NASA Astrophysics Data System (ADS)
Franchi, Fulvio; Turetta, Clara; Cavalazzi, Barbara; Corami, Fabiana; Barbieri, Roberto
2016-08-01
Trace and rare earth elements (REEs) have proven their utility as tools for assessing the genesis and early diagenesis of widespread geological bodies such as carbonate mounds, whose genetic processes are not yet fully understood. Carbonates from the Middle Devonian conical mud mounds of the Maïder Basin (eastern Anti-Atlas, Morocco) have been analysed for their REE and trace element distribution. Collectively, the carbonates from the Maïder Basin mud mounds appear to display coherent REE patterns. Three different geochemical patterns, possibly related to three different diagenetic events, include: i) dyke fills with a normal marine REE pattern, probably precipitated in equilibrium with seawater; ii) mound micrite with a particular enrichment of overall REE contents and a variable Ce anomaly, probably related to variations in pH, increases in alkalinity, or dissolution/remineralization of organic matter during early diagenesis; and iii) haematite-rich vein fills precipitated from venting fluids of probable hydrothermal origin. Our results reinforce the hypothesis that these mounds were probably affected by an early diagenesis induced by microbial activity and triggered by an abundance of dispersed organic matter, whilst venting may have affected the mounds during a later diagenetic phase.
NASA Astrophysics Data System (ADS)
Xu, Jinghai; An, Jiwen; Nie, Gaozong
2016-04-01
Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
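The two-phase scheme described above can be summarized in code; the file layout, intensity range, and data structures below are placeholders we invented for illustration:

```python
import numpy as np

# Pre-earthquake phase: per-cell losses pre-computed offline for each
# discrete intensity from gridded damage probability matrices and stored
# on disk (file names here are hypothetical placeholders).
precomputed_loss = {vi: np.load(f"loss_intensity_{vi}.npy")
                    for vi in range(6, 11)}          # intensities VI..X

def co_earthquake_loss(intensity_field):
    """intensity_field: integer seismic intensity per 30''x30'' grid cell,
    rasterized from a theoretical isoseismal map."""
    total = 0.0
    for vi, grid in precomputed_loss.items():
        total += grid[intensity_field == vi].sum()   # sum losses where I == vi
    return total
```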
Stylized facts in internal rates of return on stock index and its derivative transactions
NASA Astrophysics Data System (ADS)
Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya
2007-08-01
Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. Resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S&P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions, when a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies, as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions, may be useful in deducing the type of investment strategy from trading revenues of small portfolio investors.
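For a single buy-sell pair, the internal rate of return defined as continuous compound interest has a closed form; a small sketch (function and variable names are ours):

```python
import numpy as np

def internal_rate_of_return(buy_price, sell_price, holding_days):
    """Continuous-compounding rate r solving sell = buy * exp(r * t)."""
    t = holding_days / 365.25                 # holding time in years
    return np.log(sell_price / buy_price) / t

# Histogramming r over all buy/sell pairs (optionally only those with
# holding_days above some threshold) yields the distributions studied above.
```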
Probabilistic Reasoning for Robustness in Automated Planning
NASA Technical Reports Server (NTRS)
Schaffer, Steven; Clement, Bradley; Chien, Steve
2007-01-01
A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain the probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
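A toy version of this resource-risk scoring idea: if each action's resource use is modeled as an independent Gaussian (ignoring the bounding mentioned above), the probability of exceeding a capacity follows directly. The names and the independence assumption are ours:

```python
from math import erf, sqrt

def prob_conflict(actions, capacity):
    """actions: list of (mean_use, sd_use) per action; returns the
    probability that total resource use exceeds capacity, assuming a
    sum of independent Gaussians (truncation not handled in this sketch)."""
    mean = sum(m for m, _ in actions)
    var = sum(s * s for _, s in actions)
    z = (capacity - mean) / sqrt(var)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # = 1 - Phi(z)

# Score a candidate plan: lower risk of resource violation is better.
risk = prob_conflict([(10.0, 2.0), (5.0, 1.0), (8.0, 3.0)], capacity=30.0)
```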
NASA Technical Reports Server (NTRS)
Shapiro, Jeffrey H.
1992-01-01
Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal ⊗ apparatus state space is derived. It is analogous to the signal-band ⊗ image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG-POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.
Ejiri, Shinji; Yamada, Norikazu
2013-04-26
Towards the feasibility study of the electroweak baryogenesis in a realistic technicolor scenario, we investigate the phase structure of (2+N_f)-flavor QCD, where the mass of two flavors is fixed to a small value and the others are heavy. For the baryogenesis, the appearance of a first-order phase transition at finite temperature is a necessary condition. Using a set of configurations of two-flavor lattice QCD and applying the reweighting method, the effective potential defined by the probability distribution function of the plaquette is calculated in the presence of additional many heavy flavors. Through the shape of the effective potential, we determine the critical mass of heavy flavors separating the first-order and crossover regions and find it to become larger with N_f. We moreover study the critical line at finite density, and the first-order region is found to become wider as the chemical potential increases. Possible applications to real (2+1)-flavor QCD are discussed.
Mathematical Model to estimate the wind power using four-parameter Burr distribution
NASA Astrophysics Data System (ADS)
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
When the true probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is often more suitable than other common distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and an expression for the probability distribution of the output power of a wind turbine is derived.
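A hedged sketch of how such a fit and the derived output-power expectation might be computed with a Burr XII distribution (two shape parameters plus location and scale); the input file and turbine power-curve constants are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

speeds = np.loadtxt("wind_speeds.txt")          # hypothetical wind-speed record
c, d, loc, scale = stats.burr12.fit(speeds)     # 2 shapes + location + scale

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0e6):
    """Illustrative turbine curve: cubic between cut-in and rated speed."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

mean_power, _ = quad(lambda v: power_curve(v) * stats.burr12.pdf(v, c, d, loc, scale),
                     0.0, 40.0)                 # expected output in watts
```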
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
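The fractile-constrained maximum entropy distribution (FMED) noted above has a simple closed form: the probability mass of each fractile interval is spread uniformly over that interval, which is exactly why the density is flat and discontinuous. A small sketch of that construction:

```python
import numpy as np

def fmed_density(fractiles, cum_probs):
    """fractiles: increasing support points x_0..x_n; cum_probs: assessed
    cumulative probabilities at those points (from 0 to 1)."""
    x = np.asarray(fractiles, dtype=float)
    F = np.asarray(cum_probs, dtype=float)
    heights = np.diff(F) / np.diff(x)          # constant density per interval
    def pdf(v):
        v = np.atleast_1d(np.asarray(v, dtype=float))
        idx = np.clip(np.searchsorted(x, v, side="right") - 1, 0, len(heights) - 1)
        return np.where((v >= x[0]) & (v < x[-1]), heights[idx], 0.0)
    return pdf

pdf = fmed_density([0, 10, 25, 60, 100], [0.0, 0.25, 0.5, 0.75, 1.0])
```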
Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations
NASA Astrophysics Data System (ADS)
Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.
2018-04-01
Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts to be used in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data were abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficiently detailed, and this impacts on the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
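A concrete two-dimensional instance of such a recursive construction is the conditional-quantile method: transform the first uniform through the marginal inverse CDF, then the second through the conditional inverse CDF. The bivariate normal target below is purely our illustrative choice:

```python
import numpy as np
from scipy import stats

def simulate_bivariate_normal(n_samples, rho=0.6, rng=None):
    """x1 = f1(U1), x2 = f2(U1, U2): marginal then conditional inverse CDFs."""
    rng = rng or np.random.default_rng()
    u1, u2 = rng.uniform(size=(2, n_samples))
    x1 = stats.norm.ppf(u1)                    # inverse CDF of the first marginal
    cond_mean = rho * x1                       # x2 | x1 is normal
    cond_sd = np.sqrt(1.0 - rho ** 2)
    x2 = stats.norm.ppf(u2, loc=cond_mean, scale=cond_sd)
    return x1, x2
```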
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
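For orientation, a bare-bones Wang-Landau walk in one dimension (energy only) for a small periodic Ising chain; the macroscopic constraints and multi-order-parameter machinery of the method above are omitted, and all parameter choices are ours:

```python
import numpy as np

def wang_landau_ising_chain(n=16, ln_f_final=1e-6, flatness=0.8, rng=None):
    """Estimate ln g(E) for a periodic Ising chain of n spins with a
    single flat-histogram Wang-Landau walk over the energy axis."""
    rng = rng or np.random.default_rng()
    spins = rng.choice([-1, 1], size=n)
    energy = lambda s: -int(np.sum(s * np.roll(s, 1)))
    e_levels = np.arange(-n, n + 1, 4)         # reachable energies, periodic chain
    index = {int(e): i for i, e in enumerate(e_levels)}
    ln_g = np.zeros(len(e_levels))
    hist = np.zeros(len(e_levels))
    ln_f, e_cur = 1.0, energy(spins)
    while ln_f > ln_f_final:
        i = rng.integers(n)
        spins[i] *= -1                         # propose a single spin flip
        e_new = energy(spins)
        if np.log(rng.random()) < ln_g[index[e_cur]] - ln_g[index[e_new]]:
            e_cur = e_new                      # accept with min(1, g_old/g_new)
        else:
            spins[i] *= -1                     # reject: undo the flip
        ln_g[index[e_cur]] += ln_f
        hist[index[e_cur]] += 1
        if hist.min() > flatness * hist.mean():
            hist[:] = 0                        # histogram flat enough:
            ln_f /= 2.0                        # refine, ln f -> ln f / 2
    return e_levels, ln_g
```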
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
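The exponential-to-Bessel correspondence stated above can be checked numerically: sampling 2-D force vectors with exponentially distributed magnitude and uniform direction should give a Cartesian component density proportional to K0. A Monte Carlo sketch (sample sizes and binning are arbitrary choices):

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
n = 200_000
f_mag = rng.exponential(1.0, n)              # exponential force magnitudes
angle = rng.uniform(0.0, 2.0 * np.pi, n)     # isotropic 2-D directions
fx = f_mag * np.cos(angle)                   # Cartesian component

hist, edges = np.histogram(fx, bins=200, range=(-8, 8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = k0(np.abs(centers)) / np.pi         # (1/pi) K0(|fx|), unit normalized
# hist and theory agree to sampling noise away from fx = 0 (K0's log singularity).
```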
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
Oparin, Roman D; Moreau, Myriam; De Walle, Isabelle; Paolantoni, Marco; Idrissi, Abdenacer; Kiselev, Michael G
2015-09-18
The aim of this paper is to characterize the distribution of paracetamol conformers which are dissolved in a supercritical CO2 phase in equilibrium with their corresponding crystalline form. Quantum calculations and molecular dynamics simulations were used to characterize the structure and analyze the vibration spectra of the paracetamol conformers in vacuum and in a mixture with CO2 at various thermodynamic state parameters (p,T). The metadynamics approach was applied to efficiently sample the various conformers of paracetamol. Furthermore, using in situ IR spectroscopy, the conformers that are dissolved in supercritical CO2 were identified and the evolution of the probability of their presence as a function of thermodynamic conditions was quantified, while changes in the crystalline form of paracetamol were monitored by DSC, micro IR and Raman techniques. The DSC analysis as well as micro IR and Raman spectroscopic studies of the crystalline paracetamol show that heating up above the melting temperature of polymorph I of paracetamol and cooling down to room temperature in the presence of supercritical CO2 induces the formation of polymorph II. The in situ IR investigation shows that two conformers (Conf. 1 and Conf. 2) are present in the CO2 phase, while conformer 3 (Conf. 3) has a high probability of being present after re-crystallization. Copyright © 2015. Published by Elsevier B.V.
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
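A sketch of the Box-Cox transform-normal approach favoured above (the published method's specific modification, and the handling of zero-precipitation months, are omitted; the input file is hypothetical):

```python
import numpy as np
from scipy import stats

totals = np.loadtxt("monthly_precip_mm.txt")   # hypothetical station record
positive = totals[totals > 0]                  # zero-rain months handled separately
transformed, lam = stats.boxcox(positive)      # MLE for the Box-Cox lambda
mu, sd = transformed.mean(), transformed.std(ddof=1)

def prob_below(x_mm):
    """P(monthly total <= x_mm), conditional on a non-zero month."""
    y = stats.boxcox(np.array([float(x_mm)]), lmbda=lam)[0]
    return stats.norm.cdf(y, mu, sd)
```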
Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard
2014-02-01
Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to examine than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
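The triage rule reported above is simple enough to state as code (argument names are our assumptions):

```python
def pretest_probability_of_normal_tte(cardiac_history, age, history_uncertain=False):
    """'high' = request likely to yield a normal examination."""
    if cardiac_history:
        return "low"
    if history_uncertain and age > 70:
        return "low"
    return "high"
```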
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
NASA Astrophysics Data System (ADS)
Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen
2017-05-01
In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay p_delay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
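The unbiased sample estimators of the probability weighted moments β_r = E[X F(X)^r] can be written directly from the ordered sample (a minimal illustration, not the authors' code):

```python
import numpy as np

def pwm(sample, r):
    """Unbiased estimator b_r of beta_r = E[X F(X)^r]."""
    x = np.sort(np.asarray(sample, dtype=float))   # ascending order statistics
    n = len(x)
    j = np.arange(1, n + 1)
    w = np.ones(n)
    for m in range(1, r + 1):                      # (j-1)...(j-r) / ((n-1)...(n-r))
        w *= (j - m) / (n - m)
    return (w * x).mean()

# b_0 is the sample mean; b_0, b_1, b_2, ... yield parameter estimates for
# inverse-form distributions such as Tukey's lambda.
```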
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment made by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
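As a sketch of the assignment step described above: maximizing entropy under normalization and a mean-energy constraint yields the familiar exponential form (a standard derivation, not specific to this paper):

```latex
\max_{p}\; S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle .
% Introducing Lagrange multipliers \lambda_0 and \beta and setting the
% derivative of the Lagrangian with respect to each p_i to zero gives
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
% the Maxwell--Boltzmann form; Bose--Einstein and Fermi--Dirac arise from
% occupation-number propositions with their corresponding constraints.
```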
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the ability of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters was similar. The results also showed that the distribution type of the parameters which affect ET0 values can affect the distribution of reference evapotranspiration.
Collisional Dynamics of the Cesium D1 and D2 Transitions
2010-09-01
(Figure-list residue from the report's front matter; the recoverable titles are "Comparison of Phase Changing Probability and Polarizability" and "Phase Changing Probability and Polarizability for D2 Transition".) ...theoretically determined the values for broadening and shift rates for cesium with Argon, Krypton, and Xenon from the interatomic potentials [27]. The rates
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
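A classroom-sized illustration of the Markovian-arrival material described above: exponential waiting times generate a Poisson process, and counts in fixed windows are Poisson distributed (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 3.0                                   # mean arrivals per unit time
waits = rng.exponential(1.0 / rate, 10_000)  # memoryless waiting times
arrival_times = np.cumsum(waits)

window = 1.0
n_windows = int(arrival_times[-1] // window)
counts, _ = np.histogram(arrival_times, bins=n_windows,
                         range=(0.0, n_windows * window))
print(counts.mean(), counts.var())           # both approach `rate` (Poisson law)
```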
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation has been presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation, in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Work probability distribution and tossing a biased coin
NASA Astrophysics Data System (ADS)
Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar
2011-01-01
We show that the rare events present in dissipated work that enters Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.
2015-01-01
Purpose: Despite a robust clinical trial enterprise and encouraging phase II results, only a small minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods: We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results: In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion: Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
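The positive predictive values reported above follow from Bayes' rule; a sketch using the reported median error rates, with the two extreme priors from the abstract:

```python
def phase2_ppv(prior, alpha=0.05, power=0.90):
    """P(drug eventually approved | positive phase II), by Bayes' rule.
    alpha = median type I error; power = 1 - median type II error."""
    return (prior * power) / (prior * power + (1.0 - prior) * alpha)

print(phase2_ppv(0.348))  # colon adenocarcinoma prior (24/69): about 0.91
print(phase2_ppv(0.0))    # pancreatic prior (0/45): 0, i.e., below 1%
```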
NASA Astrophysics Data System (ADS)
Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan
The probability of occurrence of the positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by Jet Propulsion Laboratory, and transformed from geographic coordinates to magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power-system production cost simulation, probability discretization, and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional generation is solved. A reliability assessment for the distribution grid is thus implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates these indices much faster than the Monte Carlo method while maintaining accuracy.
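For comparison, a plain Monte Carlo baseline for the two indices defined above (the paper's discretization/linearization speed-up is not reproduced here; every system parameter below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
v = 8.0 * rng.weibull(2.0, N)                      # wind speed: Weibull(k=2, scale=8 m/s)
wind_p = 2.0 * np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0)   # MW, linearized curve
solar_p = 1.0 * rng.beta(2.0, 2.0, N)              # MW, Beta-distributed irradiance
conventional = 5.0                                 # MW of firm capacity
load = rng.normal(7.0, 0.5, N)                     # MW

shortfall = np.maximum(load - (wind_p + solar_p + conventional), 0.0)
lolp = (shortfall > 0).mean()                      # Loss Of Load Probability
eens = shortfall.mean() * 8760.0                   # MWh/yr, assuming i.i.d. hourly draws
```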
NASA Astrophysics Data System (ADS)
Polunin, Pavel M.
In this work we consider several nonlinearity-based and/or noise-related phenomena that have been recently observed in micro-electromechanical vibratory systems. The main goals are to closely examine these phenomena, develop an understanding of their underlying physics, derive techniques for characterizing parameters in relevant mathematical models, and determine ways to improve the performance of specific classes of micro-electromechanical systems (MEMS) used in applications. The general perspective of this work is based on the fact that nonlinearity and noise represent integral parts of the models needed to describe the response of these systems, and the focus is on situations where these generally undesirable features can be utilized or accounted for in design. We consider three different, but related, topics in this general area. The first topic uses the slowly varying states in a rotating frame of reference, where we analyze the stationary probability distribution of a nonlinear parametrically driven resonator subjected to Poisson pulses and thermal noise. We show that Poisson pulses with low pulse rates, as compared with the resonator decay rate, cause a power-law divergence of the probability density at the resonator equilibrium in both the underdamped (overdamped) regimes, in which the response does (does not) spiral in the rotating frame. We have also found that the shape of the probability distribution away from the equilibrium position is qualitatively different for the overdamped and underdamped cases. In particular, in the overdamped regime, the form of the secondary singularity in the probability distribution depends strongly on the reference phase of the resonator response and the pulse modulation phase, while in the underdamped regime several singular peaks occur in the distribution, and their locations are determined by the resonator frequency and decay rate in the rotating frame. Finally, we show that even weak Gaussian noise smoothens out the singular peaks in the probability distribution. The theoretical results are successfully compared with experimental results obtained from collaborators at the Hong Kong University of Science and Technology. Second, we discuss a time-domain technique for characterizing parameters of models that describe the response of a single vibrational mode of micromechanical resonators with symmetric restoring and damping forces. These parameters include coefficients of conservative and dissipative linear and nonlinear terms, as well as the strengths of various noise sources acting on the mode of interest. The method relies on measurements taken during a ringdown response, that is, free vibration, in which the nonlinearities result in an amplitude-dependent frequency and a non-exponential decay of the amplitude, while noise sources cause fluctuations in the resonator amplitude and phase. Analysis of the amplitude of the ringdown response allows one to estimate the quality factor and the dissipative nonlinearity, and the zero-crossing points in the ringdown measurement can be used to characterize the linear natural frequency and the cubic and quintic nonlinearities of the vibrational mode, which typically arise from a combination of mechanical and electrostatic effects. Additionally, we develop and demonstrate a statistical analysis of the zero-crossing points in the resonator response that allows one to separate the effects of additive, multiplicative, and measurement noises and estimate their corresponding intensities.
These characterization methods are demonstrated using experimental measurements obtained from collaborators at Stanford University. Finally, we examine the problem of self-induced parametric amplification in ring/disk resonating gyroscopes. We model the dynamics of these gyroscopes by considering flexural (elliptical) vibrations of a thin elastic ring subjected to electrostatic transduction and show that the parametric amplification arises naturally from nonlinear intermodal coupling between the drive and sense modes of the gyroscope. Analysis shows that this coupling results in a substantial increase in the sensitivity of the gyroscope to the external angular rate. This improvement in the gyroscope performance depends strongly on both the modal coupling strength and the operating point of the gyroscope, features which depend on details of nonlinear kinematics of, and forces acting on, the ring. Using the results from this model, we explore ways to enhance the amplification effect by changing the shape of the resonator body and attendant electrodes, and by electrostatic tuning. These results suggest new designs for ring gyros, and a general approach for other geometries, such as disk-resonator-gyros (DRGs), that should offer significant improvements in device sensitivity.
Lorentzian symmetry predicts universality beyond scaling laws
NASA Astrophysics Data System (ADS)
Watson, Stephen J.
2017-06-01
We present a covariant theory for the ageing characteristics of phase-ordering systems that possess dynamical symmetries beyond mere scalings. A chiral spin dynamics which conserves the spin-up (+) and spin-down (-) fractions, μ_+ and μ_-, serves as the emblematic paradigm of our theory. Beyond a parabolic spatio-temporal scaling, we discover a hidden Lorentzian dynamical symmetry therein, and thereby prove that the characteristic length L of spin domains grows in time t according to L = (β/√(1-σ²)) t^{1/2}, where σ := μ_+ - μ_- (the invariant spin-excess) and β is a universal constant. Furthermore, the normalised length distributions of the spin-up and the spin-down domains each provably adopt a coincident universal (σ-independent) time-invariant form, and this supra-universal probability distribution is empirically verified to assume a form reminiscent of the Wigner surmise.
Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior
NASA Astrophysics Data System (ADS)
Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran
2018-03-01
Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of the two thermal component states under amplitude decay can be viewed as the same integral form as a displaced thermal state ρ(V, d), but governed by the combined action of photon loss and thermal noise. In addition, larger values of the displacement d and noise V lead to faster decoherence for thermal-state superpositions.
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2011 CFR
2011-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
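A sketch of the recommended cumulative-distribution fitting, here by maximum likelihood on interval probabilities for a lognormal model; the interval bounds and counts below are invented for illustration:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

bounds = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, np.inf])  # sampling times (h)
counts = np.array([5, 30, 80, 60, 20, 5])                   # propagules per interval

def neg_log_lik(params):
    s, scale = np.exp(params)                 # log-parameterized to stay positive
    p = np.diff(stats.lognorm.cdf(bounds, s, scale=scale))  # interval probabilities
    return -np.sum(counts * np.log(np.clip(p, 1e-300, None)))

fit = minimize(neg_log_lik, x0=np.log([1.0, 3.0]), method="Nelder-Mead")
s_hat, scale_hat = np.exp(fit.x)              # fitted lognormal shape and scale
```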
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
ERIC Educational Resources Information Center
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
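A minimal sketch of a pseudo Bayes estimate in this spirit (our construction; the log-linear smoothing model and the shrinkage weight are illustrative assumptions, not the authors' specification):

```python
# Pseudo Bayes estimate: a weighted average of raw relative frequencies and
# model-smoothed probabilities for a score distribution.
import numpy as np

counts = np.array([2, 5, 9, 14, 10, 6, 3, 1])     # observed score frequencies
n = counts.sum()
raw = counts / n                                  # raw relative frequencies

scores = np.arange(len(counts))                   # modeled probabilities from a
coef = np.polyfit(scores, np.log(counts + 0.5), 2)  # simple log-linear (quadratic) fit
modeled = np.exp(np.polyval(coef, scores))
modeled /= modeled.sum()

w = n / (n + len(counts))                         # illustrative shrinkage weight
pseudo_bayes = w * raw + (1 - w) * modeled        # weighted average of the two
print(pseudo_bayes.round(4))
```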
Esteve-Adell, Iván; Bakker, Nadia; Primo, Ana; Hensen, Emiel; García, Hermenegildo
2016-12-14
Pt nanoparticles (NPs) strongly grafted on few-layer graphene (G) have been prepared by pyrolysis under inert atmosphere at 900 °C of chitosan films (70-120 nm thickness) containing adsorbed H2PtCl6. Preferential orientation of exposed Pt facets was assessed by X-ray diffraction of films having high Pt loading, where the 111 and 222 diffraction lines were observed, and also by SEM imaging comparing elemental Pt mapping with the image of the 111-oriented particles. Characterization techniques allow determination of the Pt content (from 45 ng to 1 μg cm-2, depending on the preparation conditions), particle size distribution (9 ± 2 nm), and thickness of the films (12-20 nm). Oriented Pt NPs on G exhibit at least 2 orders of magnitude higher catalytic activity for aqueous-phase reforming of ethylene glycol to H2 and CO2 compared to analogous samples of randomly oriented Pt NPs supported on preformed graphene. Oriented [Formula: see text]/fl-G undergoes deactivation upon reuse, the most probable cause being Pt particle growth due to the presence of high concentrations of carboxylic acids acting as mobilizing agents during the course of the reaction.
Locked modes in two reversed-field pinch devices of different size and shell system
NASA Astrophysics Data System (ADS)
Malmberg, J.-A.; Brunsell, P. R.; Yagi, Y.; Koguchi, H.
2000-10-01
The behavior of locked modes in two reversed-field pinch devices, the Toroidal Pinch Experiment (TPE-RX) [Y. Yagi et al., Plasma Phys. Control. Fusion 41, 2552 (1999)] and Extrap T2 [J. R. Drake et al., in Plasma Physics and Controlled Nuclear Fusion Research 1996, Montreal (International Atomic Energy Agency, Vienna, 1996), Vol. 2, p. 193] is analyzed and compared. The main characteristics of the locked mode are qualitatively similar. The toroidal distribution of the mode locking shows that field errors play a role in both devices. The probability of phase locking is found to increase with increasing magnetic fluctuation levels in both machines. Furthermore, the probability of phase locking increases with plasma current in TPE-RX despite the fact that the magnetic fluctuation levels decrease. A comparison with computations using a theoretical model estimating the critical mode amplitude for locking [R. Fitzpatrick et al., Phys. Plasmas 6, 3878 (1999)] shows a good correlation with experimental results in TPE-RX. In Extrap T2, the magnetic fluctuations scale weakly with both plasma current and electron density. This is also reflected in the weak scaling of the magnetic fluctuation levels with the Lundquist number (~S^-0.06). In TPE-RX, the corresponding scaling is ~S^-0.18.
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
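The following toy sketch (ours; the one-dimensional experiment space, the logistic outcome model, and the population size are assumptions) illustrates the rising-threshold idea behind nested entropy sampling:

```python
# Keep a population of candidate experiments and repeatedly replace the least
# informative one with a random candidate whose predicted-outcome entropy
# exceeds the current (rising) threshold.
import numpy as np

rng = np.random.default_rng(0)
models = rng.uniform(0, 1, size=20)               # a probable set of model parameters

def outcome_entropy(x):
    """Shannon entropy of the binary outcome predicted at experiment setting x."""
    p = 1.0 / (1.0 + np.exp(-10 * (x - models)))  # toy per-model outcome probabilities
    q = np.clip(p.mean(), 1e-12, 1 - 1e-12)       # mixture over the probable models
    return -(q * np.log(q) + (1 - q) * np.log(1 - q))

pop = rng.uniform(0, 1, size=10)                  # initial experiment samples
for _ in range(50):
    ent = np.array([outcome_entropy(x) for x in pop])
    worst = np.argmin(ent)                        # rising threshold = population minimum
    for _try in range(10_000):                    # rejection sampling with a guard
        cand = rng.uniform(0, 1)
        if outcome_entropy(cand) > ent[worst]:
            pop[worst] = cand
            break

best = pop[np.argmax([outcome_entropy(x) for x in pop])]
print("most informative experiment setting ~", round(best, 3))
```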
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying of the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
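A small simulation sketch (our construction, not the paper's derivation) of the headline effect for a log-normal risk factor: setting the threshold at the estimated 99% quantile yields a realized failure frequency above the nominal 1%:

```python
# Parameter uncertainty inflates failure probability: estimate location and
# scale from n past observations, set the threshold at the estimated quantile,
# and count how often the next loss exceeds it.
import numpy as np

rng = np.random.default_rng(42)
n, eps, trials = 30, 0.01, 20_000
z = 2.3263478740408408                            # standard normal 99% quantile
fail = 0
for _ in range(trials):
    data = rng.normal(0.0, 1.0, n)                # log-losses from past observations
    m, s = data.mean(), data.std(ddof=1)          # estimated location and scale
    threshold = m + z * s                         # estimated 99% quantile (log scale)
    fail += rng.normal(0.0, 1.0) > threshold      # does the next loss exceed it?
print("nominal:", eps, "realized:", fail / trials)  # realized ~ 0.015 > 0.01
```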
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1993-10-01
Performance of 8 three-parameter probability distributions for representing annual extreme and partial duration precipitation data at stations in the northeastern and southeastern United States is investigated. Particular attention is paid to fidelity on the right tail, through use of a bootstrap procedure simulating extrapolation on the right tail beyond the data. It is found that the beta-κ distribution best describes the extreme right tail of annual extreme series, and the beta-P distribution is best for the partial duration data. The conventionally employed two-parameter Gumbel distribution is found to substantially underestimate probabilities associated with the larger precipitation amounts for both annual extreme and partial duration data. Fitting the distributions using left-censored data did not result in improved fits to the right tail.
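An illustrative sketch of the comparison (not the paper's code; the synthetic data, and the GEV standing in for the paper's three-parameter beta-κ and beta-P candidates, are our assumptions):

```python
# Compare the upper tail of a two-parameter Gumbel fit against a
# heavier-tailed three-parameter GEV fit on synthetic annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
annmax = stats.genextreme.rvs(c=-0.2, loc=50, scale=15, size=80, random_state=rng)

gum = stats.gumbel_r.fit(annmax)                  # two-parameter fit
gev = stats.genextreme.fit(annmax)                # three-parameter fit

for T in (50, 100):                               # return periods in years
    p = 1 - 1 / T
    print(T, "yr:",
          "Gumbel", round(stats.gumbel_r.ppf(p, *gum), 1),
          "GEV", round(stats.genextreme.ppf(p, *gev), 1))
# the Gumbel quantiles fall below the GEV ones at long return periods,
# mirroring the underestimation reported in the abstract
```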
The Schrödinger Equation, the Zero-Point Electromagnetic Radiation, and the Photoelectric Effect
NASA Astrophysics Data System (ADS)
França, H. M.; Kamimura, A.; Barreto, G. A.
2016-04-01
A Schrödinger-type equation for a mathematical probability amplitude Ψ(x, t) is derived from the generalized phase space Liouville equation valid for the motion of a microscopic particle, with mass M and charge e, moving in a potential V(x). The particle phase space probability density is denoted Q(x, p, t), and the entire system is immersed in the "vacuum" zero-point electromagnetic radiation. We show, in the first part of the paper, that the generalized Liouville equation reduces to a simpler Liouville equation in the equilibrium limit, where the small radiative corrections cancel each other approximately; this simpler equation facilitates the calculations in the second part of the paper. In this second part, we address the following task: since the Schrödinger equation depends on ħ, and the zero-point electromagnetic spectral distribution, given by ρ0(ω) = ħω³/(2π²c³), also depends on ħ, it is interesting to verify the possible dynamical connection between ρ0(ω) and the Schrödinger equation. We prove that Planck's constant, present in the momentum operator of the Schrödinger equation, is deeply related to the ubiquitous zero-point electromagnetic radiation with spectral distribution ρ0(ω). For simplicity, we do not invoke the hypothesis of de Broglie matter waves. The implications of our study for the standard interpretation of the photoelectric effect are discussed by considering the main characteristics of the phenomenon. We also briefly mention the effects of the zero-point radiation on the tunneling phenomenon and the Compton effect.
Heat conduction in periodic laminates with probabilistic distribution of material properties
NASA Astrophysics Data System (ADS)
Ostrowski, Piotr; Jędrysiak, Jarosław
2017-04-01
This contribution deals with a problem of heat conduction in a two-phase laminate made of periodically distributed micro-laminas along one direction. In general, the Fourier's Law describing the heat conduction in a considered composite has highly oscillating and discontinuous coefficients. Therefore, the tolerance averaging technique (cf. Woźniak et al. in Thermomechanics of microheterogeneous solids and structures. Monografie - Politechnika Łódzka, Wydawnictwo Politechniki Łódzkiej, Łódź, 2008) is applied. Based on this technique, the averaged differential equations for a tolerance-asymptotic model are derived and solved analytically for given initial-boundary conditions. The second part of this contribution is an investigation of the effect of the material properties ratio ω of the two components on the total temperature field θ, under the assumption that the conductivities of the micro-laminas are not necessarily uniquely described. Numerical experiments (Monte Carlo simulation) are executed under the assumption that ω is a random variable with a fixed probability distribution. At the end, based on the obtained results, a crucial hypothesis is formulated.
Adaptive Detector Arrays for Optical Communications Receivers
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Srinivasan, M.
2000-01-01
The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.
A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Matney, Mark J.
2008-01-01
We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power-law with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power-laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS to size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and by example apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.
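A numerical sketch of the central claim (our illustration; the exponent, the albedo parameters, and the brightness proxy A·d² are assumptions): blurring a power-law size distribution with log-normal albedos preserves the power-law slope, so a single albedo recovers the size distribution up to an offset:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
q = 2.0                                           # cumulative size law N(>d) ~ d^-q
d = 0.1 * rng.uniform(size=N) ** (-1.0 / q)       # sizes via inverse-CDF sampling
A = rng.lognormal(np.log(0.13), 0.7, size=N)      # log-normal optical albedos

d_est = np.sqrt(A / 0.13) * d                     # size inferred from a brightness
                                                  # proxy A*d^2 using the albedo 0.13

def ccdf_slope(x):
    """Log-log slope of the empirical complementary CDF over the upper 10%."""
    xs = np.sort(x)
    ccdf = 1.0 - np.arange(1, N + 1) / (N + 1.0)
    m = xs > np.quantile(xs, 0.9)
    return np.polyfit(np.log(xs[m]), np.log(ccdf[m]), 1)[0]

print("true size slope    :", round(ccdf_slope(d), 2))      # about -2
print("inferred size slope:", round(ccdf_slope(d_est), 2))  # similar: the power law survives
```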
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during wave impact were first obtained and analyzed statistically. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed; the results show that this distribution accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions was comprehensively presented; the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models and were found to decrease with increased stiffness of the elastic support. A damage criterion for the structural model under wave impact was also discussed in a preliminary way: the model was destroyed when the average slamming time exceeded a certain value during the wave impact.
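A brief sketch of the distributional fit described above (the synthetic data and parameter values are our assumptions, not the study's measurements):

```python
# Fit a three-parameter Weibull to slamming-pressure peak maxima and evaluate
# the exceedance probability at the sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
peaks = stats.weibull_min.rvs(c=1.6, loc=2.0, scale=5.0, size=300, random_state=rng)

c, loc, scale = stats.weibull_min.fit(peaks)      # shape, location, scale
p_exceed_mean = stats.weibull_min.sf(peaks.mean(), c, loc, scale)
print("exceedance probability at the mean peak:", round(p_exceed_mean, 4))
# values above 1/e = 0.3679 at the mean are consistent with the abstract's claim
```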
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
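For readers who want to reproduce the qualitative behaviour, here is a Monte Carlo sketch of the capital-dependent game (our construction; the paper instead computes the distributions exactly via Fourier transform, and the ε and coin biases below are the standard textbook choices):

```python
# Parrondo's paradox, capital-dependent variant: game A is a single biased
# coin; game B uses a worse coin when the capital is divisible by 3.
import numpy as np

rng = np.random.default_rng(5)
eps = 0.005
pA = 0.5 - eps                                    # game A win probability
pB_mult3, pB_other = 0.1 - eps, 0.75 - eps        # game B win probabilities

def play(sequence, rounds, trials=100_000):
    capital = np.zeros(trials, dtype=np.int64)
    for t in range(rounds):
        if sequence[t % len(sequence)] == "A":
            p = np.full(trials, pA)
        else:
            p = np.where(capital % 3 == 0, pB_mult3, pB_other)
        capital += np.where(rng.uniform(size=trials) < p, 1, -1)
    return capital

cap = play("AABB", rounds=100)
print("mean capital after 100 rounds:", cap.mean())  # typically positive: the paradox
# the capital parity is fixed after an even number of rounds, hence the two
# limiting distributions for odd and even round numbers noted in the paper
print("support parity:", np.unique(cap % 2))
```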
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
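A conceptual sketch of the probabilistic workflow described (all parameter values and the one-route steady-state flux model are made-up assumptions for illustration):

```python
# Propagate parameter uncertainty through a flux model by Monte Carlo, then
# rank parameter importance with a regression on the sampled inputs.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
D = rng.lognormal(np.log(1e-9), 0.5, n)           # diffusivity (m^2/s), assumed
K = rng.lognormal(np.log(10.0), 0.7, n)           # partition coefficient, assumed
h = rng.normal(20e-6, 3e-6, n).clip(5e-6)         # stratum corneum thickness (m)

flux = D * K / h                                  # steady-state Fickian flux per unit conc.

X = np.column_stack([np.log(D), np.log(K), np.log(h)])
Xs = (X - X.mean(0)) / X.std(0)                   # standardized regressors
y = (np.log(flux) - np.log(flux).mean()) / np.log(flux).std()
coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
print(dict(zip(["log D", "log K", "log h"], coef.round(3))))  # importance ranking
```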
Analysis of Geothermal Pathway in the Metamorphic Area, Northeastern Taiwan
NASA Astrophysics Data System (ADS)
Wang, C.; Wu, M. Y.; Song, S. R.; Lo, W.
2016-12-01
Play fairway analysis provides a quantitative measure in geothermal energy development: it is an important tool that can present a probability map of potential resources, through studies of geologic uncertainty, for early-phase decision making in the related industries. While source, pathway, and fluid are the three main geologic factors in traditional geothermal systems, identifying the heat paths is critical to reducing drilling cost. Taiwan lies in East Asia on the western edge of the Pacific Ocean, on the convergent boundary of the Eurasian Plate and the Philippine Sea Plate, a region of frequent earthquake activity. This study focuses on a metamorphic area in the western corner of the Yi-Lan plain in northeastern Taiwan, with high geothermal potential and several existing exploration sites. Given the high subsurface temperature gradient in the mountain belts and abundant hydrologic systems, fed by thousands of millimeters of annual precipitation, that bring heat closer to the surface, the current geothermal conceptual model indicates the importance of the pathway distribution, which controls where extractable heat may concentrate. The study conducts surface lineation analysis using the analytic hierarchy process to determine weights among various fracture types for their roles in geothermal pathways, based on remote sensing data, published geologic maps, and field measurements, to produce a regional fracture distribution probability map. The results display how the spatial distribution of pathways through various fractures could affect geothermal systems, identify the geothermal plays using statistical data analysis, and compare them against the existing drilling data.
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Measurements of gas hydrate formation probability distributions on a quasi-free water droplet
NASA Astrophysics Data System (ADS)
Maeda, Nobuo
2014-06-01
A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. It would ideally be desirable to be able to measure gas hydrate formation probability distributions of a single water droplet or mist that is freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of a water droplet with the solid walls. Here we report the development of a second generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet which sits on a perfluorocarbon oil in a container that is coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.
Distinguishability notion based on Wootters statistical distance: Application to discrete maps
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.
2017-08-01
We study the distinguishability notion given by Wootters for states represented by probability density functions. This presents the particularity that it can also be used for defining a statistical distance in chaotic unidimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄, we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. Also, we give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in the phase space. We illustrate the results in the case of the logistic and the circle maps, numerically and analytically, and we obtain d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the metric space associated with arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinal of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
Quantifying the entropic cost of cellular growth control
NASA Astrophysics Data System (ADS)
De Martino, Daniele; Capuani, Fabrizio; De Martino, Andrea
2017-07-01
Viewing the ways a living cell can organize its metabolism as the phase space of a physical system, regulation can be seen as the ability to reduce the entropy of that space by selecting specific cellular configurations that are, in some sense, optimal. Here we quantify the amount of regulation required to control a cell's growth rate by a maximum-entropy approach to the space of underlying metabolic phenotypes, where a configuration corresponds to a metabolic flux pattern as described by genome-scale models. We link the mean growth rate achieved by a population of cells to the minimal amount of metabolic regulation needed to achieve it through a phase diagram that highlights how growth suppression can be as costly (in regulatory terms) as growth enhancement. Moreover, we provide an interpretation of the inverse temperature β controlling maximum-entropy distributions based on the underlying growth dynamics. Specifically, we show that the asymptotic value of β for a cell population can be expected to depend on (i) the carrying capacity of the environment, (ii) the initial size of the colony, and (iii) the probability distribution from which the inoculum was sampled. Results obtained for E. coli and human cells are found to be remarkably consistent with empirical evidence.
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks arising from the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After a Monte Carlo technique is used to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, the factors affecting those consequences are identified together with their probability distributions. The functional relationship among all the factors is then established, and the probability distributions for all factor effects are combined by means of a Monte Carlo technique.
Score distributions of gapped multiple sequence alignments down to the low-probability tail
NASA Astrophysics Data System (ADS)
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, Rachel; Young, Katherine
Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to have high risk in the early stages of project development because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To aid in lowering exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were Moeck and Beardsmore play types (Moeck et al. 2014) and Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified based on their associated play type, and then reservoir characteristics data were catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or choose to use the probability distribution functions provided from the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessments of geothermal resources by comparing the estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made by the comparison.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
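A minimal sketch of such a model (our construction; the Beta mixing distribution, the visit count, and the data are illustrative assumptions):

```python
# Site-occupancy likelihood with heterogeneous detection: a zero-inflated
# binomial whose detection probability p is integrated over a Beta mixture.
import numpy as np
from scipy import stats, integrate, optimize

y = np.array([0, 0, 1, 3, 0, 2, 0, 0, 4, 1])      # detections per site over J visits
J = 5

def site_lik(yi, psi, a, b):
    # integrate Binomial(J, p) over p ~ Beta(a, b)
    f = lambda p: stats.binom.pmf(yi, J, p) * stats.beta.pdf(p, a, b)
    det, _ = integrate.quad(f, 0, 1)
    return psi * det + (1 - psi) * (yi == 0)      # zero inflation for true absence

def nll(theta):
    psi, a, b = theta
    if not (0 < psi < 1) or a <= 0 or b <= 0:
        return np.inf
    return -sum(np.log(site_lik(yi, psi, a, b)) for yi in y)

res = optimize.minimize(nll, [0.7, 2.0, 2.0], method="Nelder-Mead")
print("occupancy psi, Beta(a, b):", res.x.round(3))
```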
New tools for characterizing swarming systems: A comparison of minimal models
NASA Astrophysics Data System (ADS)
Huepe, Cristián; Aldana, Maximino
2008-05-01
We compare three simple models that reproduce qualitatively the emergent swarming behavior of bird flocks, fish schools, and other groups of self-propelled agents by using a new set of diagnosis tools related to the agents’ spatial distribution. Two of these correspond in fact to different implementations of the same model, which had been previously confused in the literature. All models appear to undergo a very similar order-to-disorder phase transition as the noise level is increased if we only compare the standard order parameter, which measures the degree of agent alignment. When considering our novel quantities, however, their properties are clearly distinguished, unveiling previously unreported qualitative characteristics that help determine which model best captures the main features of realistic swarms. Additionally, we analyze the agent clustering in space, finding that the distribution of cluster sizes is typically exponential at high noise, and approaches a power-law as the noise level is reduced. This trend is sometimes reversed at noise levels close to the phase transition, suggesting a non-trivial critical behavior that could be verified experimentally. Finally, we study a bi-stable regime that develops under certain conditions in large systems. By computing the probability distributions of our new quantities, we distinguish the properties of each of the coexisting metastable states. Our study suggests new experimental analyses that could be carried out to characterize real biological swarms.
Scanziani, Alessio; Singh, Kamaljit; Blunt, Martin J; Guadagnini, Alberto
2017-06-15
Multiphase flow in porous media is strongly influenced by the wettability of the system, which affects the arrangement of the interfaces of different phases residing in the pores. We present a method for estimating the effective contact angle, which quantifies the wettability and controls the local capillary pressure within the complex pore space of natural rock samples, based on the physical constraint of constant curvature of the interface between two fluids. This algorithm is able to extract a large number of measurements from a single rock core, resulting in a characteristic distribution of effective in situ contact angle for the system, that is modelled as a truncated Gaussian probability density distribution. The method is first validated on synthetic images, where the exact angle is known analytically; then the results obtained from measurements within the pore space of rock samples imaged at a resolution of a few microns are compared to direct manual assessment. Finally the method is applied to X-ray micro computed tomography (micro-CT) scans of two Ketton cores after waterflooding, that display water-wet and mixed-wet behaviour. The resulting distribution of in situ contact angles is characterized in terms of a mixture of truncated Gaussian densities. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Carpentier, David; Le Doussal, Pierre
2000-11-01
We study the two dimensional XY model with quenched random phases and its Coulomb gas formulation. A novel renormalization group (RG) method is developed which allows one to study perturbatively the glassy low temperature XY phase and the transition at which frozen topological defects (vortices) proliferate. This RG approach is constructed both from the replicated Coulomb gas and, equivalently without the use of replicas, using the probability distribution of the local disorder (random defect core energy). By taking into account the fusion of environments (i.e., charge fusion in the replicated Coulomb gas), this distribution is shown to obey a Kolmogorov-type (KPP) nonlinear RG equation which admits traveling wave solutions and exhibits a freezing phenomenon analogous to glassy freezing in Derrida's random energy models. The resulting physical picture is that the distribution of local disorder becomes broad below a freezing temperature and that the transition is controlled by rare favorable regions for the defects, the density of which can be used as the new perturbative parameter. The determination of marginal directions at the disorder induced transition is shown to be related to the well studied front velocity selection problem in the KPP equation, and the universality of the novel critical behaviour obtained here to the known universality of the corrections to the front velocity. Applications to other two dimensional problems are mentioned at the end.
Chang, Moo Been; Chi, Kai Hsien; Chang, Shu Hao; Yeh, Jhy Wei
2007-01-01
Partitioning of PCDD/F congeners between vapor and solid phases, and the removal and destruction efficiencies achieved with a selective catalytic reduction (SCR) system for PCDD/Fs, at an existing municipal waste incinerator (MWI) and a metal smelting plant (MSP) in Taiwan are evaluated via stack sampling and analysis. The MWI investigated is equipped with electrostatic precipitators (EP, operating temperature: 230 degrees C), wet scrubbers (WS, operating temperature: 70 degrees C) and SCR (operating temperature: 220 degrees C) as major air pollution control devices (APCDs). The PCDD/F concentration measured in the stack gas of the MWI investigated is 0.728 ng-TEQ/Nm(3). The removal efficiency of the WS+SCR system for PCDD/Fs reaches 93% in the MWI investigated. The MSP investigated is equipped with EP (operating temperature: 240 degrees C) and SCR (operating temperature: 290 degrees C) as APCDs. The flue gas sampling results also indicate that the PCDD/F concentration treated with SCR is 1.35 ng-TEQ/Nm(3). The SCR system adopted in the MSP can remove 52.3% of PCDD/Fs from the flue gases (SCR operating temperature: 290 degrees C, gas flow rate: 660 kN m(3)/h). In addition, the distributions of PCDD/F congeners observed in the flue gases of the MWI and MSP investigated are significantly different. This study also indicates that the PCDD/F congeners measured in the flue gases of those two facilities are mostly distributed in the vapor phase prior to the SCR system and shift to the solid phase (vapor-phase PCDD/Fs are effectively decomposed) after treatment with the catalyst. The results further indicate that the SCR can transform highly chlorinated PCDD/F congeners into less-chlorinated congeners, probably by dechlorination, while the removal efficiency of vapor-phase PCDD/Fs increases with increasing chlorination.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations including the unsampled location permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification to an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
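The core IPF iteration can be sketched in a few lines (a toy example with assumed marginal constraints, not the article's implementation):

```python
# Iterative proportional fitting: adjust a joint probability table until its
# marginals match imposed constraints, starting from an initial estimate.
import numpy as np

P = np.full((3, 3), 1 / 9.0)                      # initial multivariate estimate
row_target = np.array([0.5, 0.3, 0.2])            # marginals inferred from wells
col_target = np.array([0.2, 0.5, 0.3])

for _ in range(100):
    P *= (row_target / P.sum(axis=1))[:, None]    # fit the row marginals
    P *= (col_target / P.sum(axis=0))[None, :]    # fit the column marginals

print(P.round(4))
print("rows:", P.sum(1).round(4), "cols:", P.sum(0).round(4))
```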
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
NASA Astrophysics Data System (ADS)
Hamlaoui, Ikram; Bencheraiet, Reguia; Bensegueni, Rafik; Bencharif, Mustapha
2018-03-01
In this study, the antioxidant capacity of three chalcone derivatives was evaluated by DPPH free radical scavenging. Experimental data showed low antioxidant activity (IC50 ± SD) of these molecules in comparison with BHT. The mechanism of DPPH radical scavenging was elucidated by means of density functional theory (DFT) calculations. The tested compounds and their corresponding radicals and anions were optimized using the B3LYP functional with the 6-31G (d,p) basis set in the gas phase. The C-PCM model was used to perform solvent medium calculations. On the basis of theoretical calculations, it was shown that the HAT mechanism was predominant in the gas phase, whereas the SET-PT and SPLET mechanisms were favored in the presence of the solvent. Moreover, the HOMO orbitals and the spin density distribution were evaluated to predict the probable sites for free radical attack.
Statistical moments of quantum-walk dynamics reveal topological quantum transitions.
Cardano, Filippo; Maffei, Maria; Massa, Francesco; Piccirillo, Bruno; de Lisio, Corrado; De Filippis, Giulio; Cataudella, Vittorio; Santamato, Enrico; Marrucci, Lorenzo
2016-04-22
Many phenomena in solid-state physics can be understood in terms of their topological properties. Recently, controlled protocols of quantum walk (QW) are proving to be effective simulators of such phenomena. Here we report the realization of a photonic QW showing both the trivial and the non-trivial topologies associated with chiral symmetry in one-dimensional (1D) periodic systems. We find that the probability distribution moments of the walker position after many steps can be used as direct indicators of the topological quantum transition: while varying a control parameter that defines the system phase, these moments exhibit a slope discontinuity at the transition point. Numerical simulations strongly support the conjecture that these features are general of 1D topological systems. Extending this approach to higher dimensions, different topological classes, and other typologies of quantum phases may offer general instruments for investigating and experimentally detecting quantum transitions in such complex systems.
Stochastic and Boltzmann-like models for behavioral changes, and their relation to game theory
NASA Astrophysics Data System (ADS)
Helbing, Dirk
1993-03-01
In the last decade, stochastic models have been shown to be very useful for the quantitative modelling of social processes. Here, a configurational master equation for the description of behavioral changes by pair interactions of individuals is developed. Three kinds of social pair interactions are distinguished: avoidance processes, compromising processes, and imitative processes. Computational results are presented for a special case of imitative processes: the competition of two equivalent strategies. They show a phase transition that describes the self-organization of a behavioral convention. This phase transition is further analyzed by examining the equations for the most probable behavioral distribution, which are Boltzmann-like equations. Special cases of Boltzmann-like equations do not obey the H-theorem and have oscillatory or even chaotic solutions. A suitable Taylor approximation leads to the so-called game dynamical equations (also known as selection-mutation equations in the theory of evolution).
Numerical detection of the Gardner transition in a mean-field glass former.
Charbonneau, Patrick; Jin, Yuliang; Parisi, Giorgio; Rainone, Corrado; Seoane, Beatriz; Zamponi, Francesco
2015-07-01
Recent theoretical advances predict the existence, deep into the glass phase, of a novel phase transition, the so-called Gardner transition. This transition is associated with the emergence of a complex free energy landscape composed of many marginally stable sub-basins within a glass metabasin. In this study, we explore several methods to detect numerically the Gardner transition in a simple structural glass former, the infinite-range Mari-Kurchan model. The transition point is robustly located from three independent approaches: (i) the divergence of the characteristic relaxation time, (ii) the divergence of the caging susceptibility, and (iii) the abnormal tail in the probability distribution function of cage order parameters. We show that the numerical results are fully consistent with the theoretical expectation. The methods we propose may also be generalized to more realistic numerical models as well as to experimental systems.
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distribution as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP 21).
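A toy sketch of the route-level idea (our own example; the discretized times and the correlated joint pmf are assumptions): combine successive link distributions through their joint, rather than independent, probabilities:

```python
# Route travel-time pmf from two successive links, using the link-to-link
# joint probabilities instead of an independence assumption.
import numpy as np

t = np.array([10, 12, 14])                        # discretized link times (minutes)
joint = np.array([[0.25, 0.08, 0.02],             # P(link1 = t_i, link2 = t_j),
                  [0.08, 0.20, 0.07],             # positively correlated links
                  [0.02, 0.07, 0.21]])
assert abs(joint.sum() - 1) < 1e-12

route = {}
for i, ti in enumerate(t):
    for j, tj in enumerate(t):
        route[ti + tj] = route.get(ti + tj, 0.0) + joint[i, j]
print(dict(sorted(route.items())))                # route travel-time pmf
```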
General formulation of long-range degree correlations in complex networks
NASA Astrophysics Data System (ADS)
Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke
2018-06-01
We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs little underutilised and little overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated from a well-defined duration distribution of the lists. The aim was to propose a method of predicting the probabilities of underrun and overrun of lists of cases using the Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by the Type IV Pearson distribution and the t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by the Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. Values are mean (SEM). The proposed Type IV Pearson distribution is more accurate than the t-distribution for estimating the probabilities of underrun and overrun of lists of cases. However, as not all individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
NASA Astrophysics Data System (ADS)
Weber, T.; Bartl, P.; Durst, J.; Haas, W.; Michel, T.; Ritter, A.; Anton, G.
2011-08-01
In the last decades, phase-contrast imaging using a Talbot-Lau grating interferometer has become possible even with a low-brilliance X-ray source. With the potential of increasing the soft-tissue contrast, this method is on its way into medical imaging. For this purpose, knowledge of the underlying physics of this technique is necessary. With this paper, we would like to contribute to the understanding of grating-based phase-contrast imaging by presenting results of measurements and simulations regarding the noise behaviour of the differential phases. These measurements were done using a microfocus X-ray tube with a hybrid, photon-counting, semiconductor Medipix2 detector. The additional simulations were performed by our in-house developed phase-contrast simulation tool “SPHINX”, combining both wave and particle contributions of the simulated photons. The results obtained by both of these methods show the same behaviour. Increasing the number of photons leads to a linear decrease of the standard deviation of the phase. The number of phase steps used has no influence on the standard deviation if the total number of photons is held constant. Furthermore, the probability density function (pdf) of the reconstructed differential phases was analysed. It turned out that the so-called von Mises distribution is the physically correct pdf, which was also confirmed by measurements. This information advances the understanding of grating-based phase-contrast imaging and can be used to improve image quality.
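The reported noise behaviour can be reproduced in a toy numerical experiment: reconstruct a differential phase from a phase-stepping curve with Poisson counting noise and vary the total photon number. The sketch below (visibility, phase, and step count are hypothetical; this is not the SPHINX tool) recovers the phase from the first Fourier component of the stepping curve:

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true, visibility = 0.7, 0.3            # hypothetical values

def reconstructed_phase(n_total, K):
    """Recover the phase from K phase steps with Poisson counting noise."""
    k = np.arange(K)
    expected = n_total / K * (1 + visibility * np.cos(2 * np.pi * k / K + phi_true))
    counts = rng.poisson(expected)
    # The first Fourier component of the stepping curve carries the phase.
    return np.angle(np.sum(counts * np.exp(-1j * 2 * np.pi * k / K)))

for n_total in (1e3, 1e4, 1e5):
    sd = np.std([reconstructed_phase(n_total, K=8) for _ in range(2000)])
    print(f"N = {n_total:.0e}: std of reconstructed phase = {sd:.4f} rad")
```

Repeating the loop with different K at fixed n_total checks the paper's second observation, that the number of steps does not change the standard deviation.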
A study of complex scaling transformation using the Wigner representation of wavefunctions.
Kaprálová-Ždánská, Petra Ruth
2011-05-28
The complex scaling operator exp(−θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics
Experimental study of the mutual influence of fibre Faraday elements in a spun-fibre interferometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gubin, V P; Morshnev, S K; Przhiyalkovsky, Ya V
2015-08-31
An all-spun-fibre linear reflective interferometer with two linked Faraday fibre coils is studied. It is found experimentally that the Faraday fibre coils in this interferometer influence one another. This manifests itself as an additional phase shift of the interferometer response, which depends on the circular birefringence induced by the Faraday effect in both coils. In addition, the interferometer contrast and the magneto-optical sensitivity of one of the coils change. A probable physical mechanism of the discovered effect is the distributed coupling of orthogonally polarised waves in the fibre medium, which is caused by fibre bend in the coil. (interferometry)
Nuclear Deformation at Finite Temperature
NASA Astrophysics Data System (ADS)
Alhassid, Y.; Gilbreth, C. N.; Bertsch, G. F.
2014-12-01
Deformation, a key concept in our understanding of heavy nuclei, is based on a mean-field description that breaks the rotational invariance of the nuclear many-body Hamiltonian. We present a method to analyze nuclear deformations at finite temperature in a framework that preserves rotational invariance. The auxiliary-field Monte Carlo method is used to generate a statistical ensemble and calculate the probability distribution associated with the quadrupole operator. Applying the technique to nuclei in the rare-earth region, we identify model-independent signatures of deformation and find that deformation effects persist to temperatures higher than the spherical-to-deformed shape phase-transition temperature of mean-field theory.
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
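A minimal Monte Carlo sketch of the EQP with a parallel update follows; the rates alpha, beta, and p are hypothetical, and the bookkeeping (front site for service, arrivals appended behind the last customer) is one simple reading of the model:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, p, T = 0.5, 0.4, 0.8, 5000    # hypothetical rates

lattice = [1]                              # site 0 is the service end (front)
for _ in range(T):
    new = lattice[:]
    # Service: the front customer leaves with probability beta.
    if lattice and lattice[0] == 1 and rng.random() < beta:
        new[0] = 0
    # Parallel update: hops are decided from the old configuration.
    for i in range(1, len(lattice)):
        if lattice[i] == 1 and lattice[i - 1] == 0 and rng.random() < p:
            new[i], new[i - 1] = 0, 1
    # Drop empty sites behind the last customer, then a possible arrival.
    while new and new[-1] == 0:
        new.pop()
    if rng.random() < alpha:
        new.append(1)
    lattice = new

print("system length:", len(lattice), " customers:", sum(lattice))
```

Sweeping (alpha, beta) and recording the growth rate of len(lattice) traces out the convergent/divergent phase boundary the abstract describes.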
Roberson, A.M.; Andersen, D.E.; Kennedy, P.L.
2005-01-01
Broadcast surveys using conspecific calls are currently the most effective method for detecting northern goshawks (Accipiter gentilis) during the breeding season. These surveys typically use alarm calls during the nestling phase and juvenile food-begging calls during the fledgling-dependency phase. Because goshawks are most vocal during the courtship phase, we hypothesized that this phase would be an effective time to detect goshawks. Our objective was to improve current survey methodology by evaluating the probability of detecting goshawks at active nests in northern Minnesota in 3 breeding phases and at 4 broadcast distances and to determine the effective area surveyed per broadcast station. Unlike previous studies, we broadcast calls at only 1 distance per trial. This approach better quantifies (1) the relationship between distance and probability of detection, and (2) the effective area surveyed (EAS) per broadcast station. We conducted 99 broadcast trials at 14 active breeding areas. When pooled over all distances, detection rates were highest during the courtship (70%) and fledgling-dependency phases (68%). Detection rates were lowest during the nestling phase (28%), when there appeared to be higher variation in likelihood of detecting individuals. EAS per broadcast station was 39.8 ha during courtship and 24.8 ha during fledgling-dependency. Consequently, in northern Minnesota, broadcast stations may be spaced 712 m and 562 m apart when conducting systematic surveys during courtship and fledgling-dependency, respectively. We could not calculate EAS for the nestling phase because probability of detection was not a simple function of distance from nest. Calculation of EAS could be applied to other areas where the probability of detection is a known function of distance.
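When detection depends only on distance, the EAS calculation reduces to integrating the detection probability over the plane, EAS = ∫ 2πr p(r) dr. A minimal sketch with a hypothetical logistic detection curve (not the fitted goshawk response):

```python
import numpy as np
from scipy.integrate import quad

def p_detect(r_m):
    """Hypothetical logistic detection probability vs distance (m)."""
    return 1.0 / (1.0 + np.exp((r_m - 350.0) / 80.0))

# Effective area surveyed: integral of p(r) over the plane.
eas_m2, _ = quad(lambda r: 2.0 * np.pi * r * p_detect(r), 0.0, 2000.0)
print(f"EAS = {eas_m2 / 1e4:.1f} ha")
```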
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details, potential issues, and limitations in the application of the resulting probability distributions are highlighted.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
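For intuition, the sketch below runs a small Monte Carlo on a toy fault tree (one AND gate feeding an OR gate, with hypothetical lognormal basic-event parameters) and compares the sampled top-event quantiles against a lognormal fitted by moment matching in log space; this illustrates the idea, not the paper's closed-form construction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 100_000
# Hypothetical lognormal uncertainty distributions for basic-event probabilities.
e1 = rng.lognormal(np.log(1e-3), 0.5, n)
e2 = rng.lognormal(np.log(2e-3), 0.7, n)
e3 = rng.lognormal(np.log(1e-4), 0.4, n)

top = 1 - (1 - e1 * e2) * (1 - e3)         # top = OR(AND(e1, e2), e3)

# Lognormal approximation by moment matching in log space.
mu, sigma = np.log(top).mean(), np.log(top).std()
for q in (0.05, 0.50, 0.95):
    print(f"q={q:.2f}: MC {np.quantile(top, q):.3e}  "
          f"lognormal {np.exp(mu + sigma * stats.norm.ppf(q)):.3e}")
```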
Quantum key distribution without the wavefunction
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution allows a much more general and abstract access than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.
2002-01-01
The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments, volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff. These probability distributions can be used for such studies involving the hydrogeologic units in the region, as well as for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appears to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
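A minimal sketch of the two recipes for the difference-of-means case follows; the readings are hypothetical, and the variance inflation nu/(nu - 2) is the Bayesian Type A standard uncertainty under the noninformative priors assumed here:

```python
import numpy as np
from scipy import stats

x1 = np.array([10.2, 10.5, 10.1, 10.4, 10.3])   # hypothetical readings
x2 = np.array([9.8, 9.9, 10.0, 9.7])

m1, m2 = x1.mean(), x2.mean()
u1 = x1.std(ddof=1) / np.sqrt(len(x1))          # standard uncertainties
u2 = x2.std(ddof=1) / np.sqrt(len(x2))
nu1, nu2 = len(x1) - 1, len(x2) - 1

y, uc = m1 - m2, np.hypot(u1, u2)

# ISO-GUM route: Welch-Satterthwaite effective degrees of freedom, t interval.
nu_eff = uc**4 / (u1**4 / nu1 + u2**4 / nu2)
k_t = stats.t.ppf(0.975, nu_eff)

# Bayesian-uncertainty alternative: normal interval with inflated variances.
ub = np.sqrt(u1**2 * nu1 / (nu1 - 2) + u2**2 * nu2 / (nu2 - 2))
k_n = stats.norm.ppf(0.975)

print(f"y = {y:.3f}; t interval ±{k_t * uc:.3f} (nu_eff = {nu_eff:.1f}); "
      f"normal interval ±{k_n * ub:.3f}")
```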
Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?
Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.
2013-01-01
Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Miranowicz, Adam
2012-02-01
We study state-dependent quantum cloning that can outperform universal cloning (UC). This is possible by using some a priori information on a given quantum state to be cloned. Specifically, we propose a generalization and optical implementation of quantum optimal mirror phase-covariant cloning, which refers to optimal cloning of sets of qubits of known modulus of the expectation value of Pauli's Z operator. Our results can be applied to cloning of an arbitrary mirror-symmetric distribution of qubits on the Bloch sphere including in special cases UC and phase-covariant cloning. We show that the cloning is optimal by adapting our former optimality proof for axisymmetric cloning (Bartkiewicz and Miranowicz 2010 Phys. Rev. A 82 042330). Moreover, we propose an optical realization of the optimal mirror phase-covariant 1→2 cloning of a qubit, for which the mean probability of successful cloning varies from 1/6 to 1/3 depending on prior information on the set of qubits to be cloned. The qubits are represented by polarization states of photons generated by the type-I spontaneous parametric down-conversion. The scheme is based on the interference of two photons on an unbalanced polarization-dependent beam splitter with different splitting ratios for vertical and horizontal polarization components and the additional application of feedforward by means of Pockels cells. The experimental feasibility of the proposed setup is carefully studied including various kinds of imperfections and losses. Moreover, we briefly describe two possible cryptographic applications of the optimal mirror phase-covariant cloning corresponding to state discrimination (or estimation) and secure quantum teleportation.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
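For intuition only, the sketch below replaces the finite element model with the elementary torque-preload relation F = T/(K·d), sampling torque and friction; the nut-factor model, distribution parameters, and target range are all hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
T = rng.normal(0.32, 0.02, n)                  # applied torque (N*m), hypothetical
mu = rng.normal(0.12, 0.02, n).clip(0.03)      # thread friction, hypothetical
d = 2.0e-3                                     # nominal screw diameter (m)
K = 0.04 + 1.2 * mu                            # crude nut-factor model (hypothetical)

preload = T / (K * d)                          # F = T / (K * d)
lo, hi = 700.0, 1000.0                         # hypothetical target range (N)
print(f"preload mean {preload.mean():.0f} N, sd {preload.std():.0f} N")
print(f"P(preload in target) = {((preload > lo) & (preload < hi)).mean():.2f}")
```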
Comparative analysis through probability distributions of a data set
NASA Astrophysics Data System (ADS)
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us select the best-fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness-of-fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness-of-fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness-of-fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions, and selecting the best model. These graphs should be viewed as an addition to the goodness-of-fit tests.
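A minimal sketch of this workflow with open-source tools (scipy in place of dedicated fitting software; the data set is synthetic) fits several candidate distributions and ranks them by the Kolmogorov-Smirnov statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.gamma(2.5, 1.8, 500)            # hypothetical data set

# Fit each candidate, then compare KS distances (smaller = better fit).
for dist in (stats.gamma, stats.lognorm, stats.norm):
    params = dist.fit(data)
    ks = stats.kstest(data, dist.name, args=params)
    print(f"{dist.name:8s} KS = {ks.statistic:.4f}  p = {ks.pvalue:.3f}")
```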
Turbulent transport with intermittency: Expectation of a scalar concentration.
Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D
2016-04-01
Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively for very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations in the displacement distributions from Rayleigh are found. The probability of long distance transport is reduced over inertial range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long distance transport is, on the other hand, enhanced beyond that of the random walk for both times shorter than the Lagrangian integral time and times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave number modes (the mean and the lowest harmonic) and an eddy based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.
Unique Bond Breaking in Crystalline Phase Change Materials and the Quest for Metavalent Bonding.
Zhu, Min; Cojocaru-Mirédin, Oana; Mio, Antonio M; Keutgen, Jens; Küpers, Michael; Yu, Yuan; Cho, Ju-Young; Dronskowski, Richard; Wuttig, Matthias
2018-05-01
Laser-assisted field evaporation is studied in a large number of compounds, including amorphous and crystalline phase change materials employing atom probe tomography. This study reveals significant differences in field evaporation between amorphous and crystalline phase change materials. High probabilities for multiple events with more than a single ion detected per laser pulse are only found for crystalline phase change materials. The specifics of this unusual field evaporation are unlike any other mechanism shown previously to lead to high probabilities of multiple events. On the contrary, amorphous phase change materials as well as other covalently bonded compounds and metals possess much lower probabilities for multiple events. Hence, laser-assisted field evaporation in amorphous and crystalline phase change materials reveals striking differences in bond rupture. This is indicative of pronounced differences in bonding. These findings imply that the bonding mechanism in crystalline phase change materials differs substantially from conventional bonding mechanisms such as metallic, ionic, and covalent bonding. Instead, the data reported here confirm a recently developed conjecture, namely that metavalent bonding is a novel bonding mechanism besides those mentioned previously. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ikaite pseudomorphs in the Zaire deep-sea fan: An intermediate between calcite and porous calcite
NASA Astrophysics Data System (ADS)
Jansen, J. H. F.; Woensdregt, C. F.; Kooistra, M. J.; van der Gaast, S. J.
1987-03-01
Translucent brown aggregates of calcium-carbonate crystals have been found in cores from the Zaire deep-sea fan (west equatorial Africa). The aggregates are well preserved but very friable. Upon storage they become yellowish white and cloudy and release water. Chemical, mineralogical (XRD), petrographical, crystal-morphological, and stable-isotope data demonstrate that the crystals have passed through three phases: (1) an authigenic carbonate phase, probably calcium carbonate, which is represented by the external habit of the present crystals; (2) a translucent brown ikaite phase (CaCO3·6H2O), unstable at temperatures above 5 °C; and (3) a phase consisting of calcite microcrystals that are poorly cemented and form a porous mass within the crystal form of the morphologically unchanged first phase. The transformation from the first phase into ikaite was probably a kinetic replacement. The transformation from ikaite into the third phase occurred because of storage at room temperature. The presence of ikaite is indicative of a low-temperature, anaerobic, organic-carbon-rich marine environment. Ikaite is probably the precursor of a great number of porous calcite pseudomorphs, and possibly also of many marine authigenic microcrystalline carbonate nodules.
NASA Astrophysics Data System (ADS)
Hirono, Masahiko; Nojima, Toshio
This paper presents a new signaling architecture for radio-access control in wireless communications systems. Called THREP (for THREe-phase link set-up Process), it enables systems with low-cost configurations to provide tetherless access and wide-ranging mobility by using autonomous radio-link controls for fast cell searching and distributed call management. A signaling architecture generally consists of a radio-access part and a service-entity-access part. In THREP, the latter part is divided into two steps: preparing a communication channel, and sustaining it. Access control in THREP is thus composed of three separated parts, or protocol phases. The specifications of each phase are determined independently according to system requirements. In the proposed architecture, the first phase uses autonomous radio-link control because we want to construct low-power indoor wireless communications systems. Evaluation of channel usage efficiency and hand-over loss probability in the personal handy-phone system (PHS) shows that THREP makes the radio-access sub-system operations in a practical application model highly efficient, and the results of a field experiment show that THREP provides sufficient protection against severe fast CNR degradation in practical indoor propagation environments.
Controlling quantum interference in phase space with amplitude.
Xue, Yinghong; Li, Tingyu; Kasai, Katsuyuki; Okada-Shudo, Yoshiko; Watanabe, Masayoshi; Zhang, Yun
2017-05-23
We experimentally show quantum interference in phase space by interrogating the photon number probabilities (n = 2, 3, and 4) of a displaced squeezed state, which is generated by an optical parametric amplifier and whose displacement is controlled by the amplitude of injected coherent light. It is found that the probabilities exhibit interference oscillations that depend upon the amplitude of the controlling light field. This phenomenon is attributed to quantum interference in phase space and indicates the capability of controlling quantum interference using amplitude. This contrasts remarkably with classical optics, where interference oscillations are usually controlled by relative phase.
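The oscillations can be reproduced numerically in a truncated Fock space. The sketch below (squeezing and displacement values are hypothetical; this models the ideal state, not the experimental apparatus) builds a displaced squeezed vacuum with matrix exponentials and prints P(n) for n = 2, 3, 4 as the displacement amplitude grows:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T
vac = np.zeros(N); vac[0] = 1.0

r = 0.8                                    # hypothetical squeezing parameter
S = expm(0.5 * r * (a @ a - ad @ ad))      # squeeze operator S(z) for real z = r

for alpha in (0.0, 0.5, 1.0, 1.5, 2.0):
    D = expm(alpha * (ad - a))             # displacement D(alpha), real alpha
    psi = D @ S @ vac                      # displaced squeezed vacuum
    probs = np.abs(psi) ** 2
    print(f"alpha = {alpha:.1f}: "
          f"P(2) = {probs[2]:.3f}, P(3) = {probs[3]:.3f}, P(4) = {probs[4]:.3f}")
```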
Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc
2016-01-01
The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities in the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for the Galton-Watson branching processes when the number of offsprings of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
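A minimal simulation sketch of the setting analyzed: a Galton-Watson process with geometric offspring of mean m, run slightly below the critical point m = 1; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def total_progeny(m, max_size=10**6):
    """Total number of individuals; geometric offspring with mean m."""
    p = 1.0 / (1.0 + m)        # geometric on {0, 1, 2, ...} with mean m
    alive, total = 1, 1
    while alive and total < max_size:
        offspring = rng.geometric(p, size=alive) - 1  # shift support to {0,1,...}
        alive = offspring.sum()
        total += alive
    return total

sizes = np.array([total_progeny(m=0.98) for _ in range(5000)])
print("mean size:", sizes.mean(), " P(size > 1000):", (sizes > 1000).mean())
```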
NASA Astrophysics Data System (ADS)
Struck, Curtis; Appleton, Philip; Charmandaris, Vassilis; Reach, William; Smith, Beverly
2004-09-01
We propose to use Spitzer's unprecedented sensitivity and wide spatial and spectral coverage to study the distribution of star formation in a sample of colliding galaxies with a wide range of tidal and splash structures. Star-forming environments like those in strong tidal spirals, and in extra-disk structures like tails, were probably far more common in the early stages of galaxy evolution, and important contributors to the net star formation. Using the Spitzer data and data from other wavebands, we will compare the pattern of SF to maps of gas and dust density and phase distribution. With the help of dynamical modeling, we will relate these in turn to dynamical triggers, to better understand the trigger mechanisms. We expect our observations to complement both the SINGS archive and the archives produced by other GO programs, such as those looking at merger remnants or tidal dwarf formation.
Szabó, György; Szolnoki, Attila; Sznaider, Gustavo Ariel
2007-11-01
We study a spatial cyclic predator-prey model with an even number of species (for n=4, 6, and 8) that allows the formation of two defensive alliances consisting of the even and odd label species. The species are distributed on the sites of a square lattice. The evolution of spatial distribution is governed by iteration of two elementary processes on neighboring sites chosen randomly: if the sites are occupied by a predator-prey pair then the predator invades the prey's site; otherwise the species exchange their sites with a probability X . For low X values, a self-organizing pattern is maintained by cyclic invasions. If X exceeds a threshold value, then two types of domain grow up that are formed by the odd and even label species, respectively. Monte Carlo simulations indicate the blocking of this segregation process within a range of X for n=8.
NASA Technical Reports Server (NTRS)
Lau, K-M.; Wu, H-T.
2010-01-01
This study investigates the evolution of cloud and rainfall structures associated with Madden Julian oscillation (MJO) using Tropical Rainfall Measuring Mission (TRMM) data. Two complementary indices are used to define MJO phases. Joint probability distribution functions (PDFs) of cloud-top temperature and radar echo-top height are constructed for each of the eight MJO phases. The genesis stage of MJO convection over the western Pacific (phases 1 and 2) features a bottom-heavy PDF, characterized by abundant warm rain, low clouds, suppressed deep convection, and higher sea surface temperature (SST). As MJO convection develops (phases 3 and 4), a transition from the bottom-heavy to top-heavy PDF occurs. The latter is associated with the development of mixed-phase rain and middle-to-high clouds, coupled with rapid SST cooling. At the MJO convection peak (phase 5), a top-heavy PDF contributed by deep convection with mixed-phase and ice-phase rain and high echo-top heights (greater than 5 km) dominates. The decaying stage (phases 6 and 7) is characterized by suppressed SST, reduced total rain, increased contribution from stratiform rain, and increased nonraining high clouds. Phase 7, in particular, signals the beginning of a return to higher SST and increased warm rain. Phase 8 completes the MJO cycle, returning to a bottom-heavy PDF and SST conditions similar to phase 1. The structural changes in rain and clouds at different phases of MJO are consistent with corresponding changes in derived latent heating profiles, suggesting the importance of a diverse mix of warm, mixed-phase, and ice-phase rain associated with low-level, congestus, and high clouds in constituting the life cycle and the time scales of MJO.
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with both mutually independent multiplicative and additive noises. Using the proposed stochastic differential equation, a method is presented to evaluate a default probability under a given risk buffer.
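A one-dimensional Euler-Maruyama sketch illustrates the mechanism: linear multiplicative noise plus additive noise yields a heavy-tailed (Student-t-like, hence q-Gaussian) stationary density. All parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)
gamma, s_mult, s_add = 1.0, 0.7, 0.5       # hypothetical parameters
dt, n = 1e-3, 1_000_000

x, xs = 0.0, np.empty(n)
for i in range(n):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)   # independent noises
    x += -gamma * x * dt + s_mult * x * dW1 + s_add * dW2
    xs[i] = x

kurt = np.mean((xs - xs.mean()) ** 4) / np.var(xs) ** 2
print(f"sample kurtosis = {kurt:.1f} (3 for a Gaussian)")
```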
Net present value probability distributions from decline curve reserves estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, D.E.; Huffman, C.H.; Thompson, R.S.
1995-12-31
This paper demonstrates how reserves probability distributions can be used to develop net present value (NPV) distributions. NPV probability distributions were developed from the rate and reserves distributions presented in SPE 28333. This real-data study used practicing engineers' evaluations of production histories. Two approaches were examined to quantify portfolio risk. The first approach, the NPV Relative Risk Plot, compares the mean NPV with the NPV relative risk ratio for the portfolio. The relative risk ratio is the NPV standard deviation (σ) divided by the mean (μ) NPV. The second approach, a Risk-Return Plot, is a plot of the mean (μ) discounted cash flow rate of return (DCFROR) versus the standard deviation (σ) of the DCFROR distribution. This plot provides a risk-return relationship for comparing various portfolios. These methods may help evaluate property acquisition and divestiture alternatives and assess the relative risk of a suite of wells or fields for bank loans.
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
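In a discrete setting the square-root rule is easy to verify: if the target sits at site i with probability p[i] and each independent trial lands on site i with probability q[i], the expected number of trials is Σ p[i]/q[i], which a Lagrange multiplier (or Cauchy-Schwarz) argument shows is minimized at q ∝ √p. A minimal check:

```python
import numpy as np

x = np.linspace(-5, 5, 201)
p = np.exp(-0.5 * x**2); p /= p.sum()        # Gaussian target distribution

def expected_trials(q):
    # E[trials] = sum_i P(target at i) * 1 / P(search hits i per trial)
    return np.sum(p / q)

q_sqrt = np.sqrt(p); q_sqrt /= q_sqrt.sum()  # square-root rule
q_same = p.copy()                            # naive: search where the target is

print("sqrt rule      :", expected_trials(q_sqrt))
print("match target p :", expected_trials(q_same))
```

The naive choice gives exactly the number of sites, while the square-root rule does strictly better whenever p is non-uniform.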
Physics of traffic gridlock in a city.
Kerner, Boris S
2011-10-01
Based on simulations of stochastic three-phase and two-phase traffic flow models, we reveal that at a signalized city intersection under small link inflow rates at which a vehicle queue developed during the red phase of the light signal dissolves fully during the green phase, i.e., no traffic gridlock should be expected, nevertheless, spontaneous traffic breakdown with subsequent city gridlock occurs with some probability after a random time delay. In most cases, this traffic breakdown is initiated by a phase transition from free flow to a synchronized flow occurring upstream of the queue at the light signal. The probability of traffic breakdown at the light signal is an increasing function of the link inflow rate and duration of the red phase of the light signal.
Properties of two-mode squeezed number states
NASA Technical Reports Server (NTRS)
Chizhov, Alexei V.; Murzakhmetov, B. K.
1994-01-01
Photon statistics and phase properties of two-mode squeezed number states are studied. It is shown that photon number distribution and Pegg-Barnett phase distribution for such states have similar (N + 1)-peak structure for nonzero value of the difference in the number of photons between modes. Exact analytical formulas for phase distributions based on different phase approaches are derived. The Pegg-Barnett phase distribution and the phase quasiprobability distribution associated with the Wigner function are close to each other, while the phase quasiprobability distribution associated with the Q function carries less phase information.
Development of STS/Centaur failure probabilities liftoff to Centaur separation
NASA Technical Reports Server (NTRS)
Hudson, J. M.
1982-01-01
The results of an analysis to determine STS/Centaur catastrophic vehicle response probabilities for the phases of vehicle flight from STS liftoff to Centaur separation from the Orbiter are presented. The analysis considers only category one component failure modes as contributors to the vehicle response mode probabilities. The relevant component failure modes are grouped into one of fourteen categories of potential vehicle behavior. By assigning failure rates to each component, for each of its failure modes, the STS/Centaur vehicle response probabilities in each phase of flight can be calculated. The results of this study will be used in a DOE analysis to ascertain the hazard from carrying a nuclear payload on the STS.
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo-era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high-resolution, high-precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple-baseline) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions do approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that use of long-tailed probability distributions (like Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics of lunar topography. References: [1] Moore, H. (1971), JGR, 75(11). [2] Marcus, A. H. (1969), JGR, 74(22). [3] Pike, R. J. (1970), U.S. Geological Survey Working Paper. [4] Costes, N. C., Farmer, J. E., and George, E. B. (1972), NASA Technical Report TR R-401. [5] Parker, M. N. and Tyler, G. L. (1973), Radio Science, 8(3), 177-184. [6] Alekseev, V. A. et al. (1968), Soviet Astronomy, Vol. 11, p. 860. [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488. [8] Smith et al. (2010), GRL 37, L18204, DOI: 10.1029/2010GL043751. [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P., LPSC 2013, #2924.
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data, together with observed sounding rocket re-entry body damage characteristics, to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.
Probability and the changing shape of response distributions for orientation.
Anderson, Britt
2014-11-18
Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high-probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
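The mixture argument is easy to demonstrate: pooling zero-mean normal responses whose variance switches between trials produces excess kurtosis even though every component is normal. A minimal sketch with hypothetical variances and mixing weights:

```python
import numpy as np

rng = np.random.default_rng(9)
# Per-trial tuning width switches between a narrow and a wide component.
sigmas = rng.choice([2.0, 8.0], size=100_000, p=[0.7, 0.3])  # hypothetical
errors = rng.normal(0.0, sigmas)

kurt = np.mean(errors**4) / np.var(errors)**2
print(f"kurtosis = {kurt:.2f} (3 for a single normal)")
```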
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability function. Assuming that the winds can be considered bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
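Property (2) is straightforward to check numerically for the zero-mean, equal-variance, uncorrelated case assumed in the sketch below (the value of σ is hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
sigma = 5.0                                   # m/s, hypothetical
u, v = rng.normal(0.0, sigma, (2, 100_000))   # wind components
speed = np.hypot(u, v)                        # wind speed

# Speed of two iid N(0, sigma^2) components is Rayleigh with scale sigma.
ks = stats.kstest(speed, "rayleigh", args=(0, sigma))
print("KS statistic vs Rayleigh(sigma):", ks.statistic)
```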
Joint probabilities and quantum cognition
NASA Astrophysics Data System (ADS)
de Barros, J. Acacio
2012-12-01
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
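In classical terms, the existence question is a linear feasibility problem. The sketch below checks by linear programming whether three ±1 random variables with prescribed pairwise moments admit any joint distribution; the choice E[XY] = E[YZ] = E[XZ] = -1 is a standard infeasible example, not the paper's oscillator construction:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

atoms = list(itertools.product([-1, 1], repeat=3))   # the 8 outcomes (x, y, z)
A_eq = [[1.0] * 8,                                   # probabilities sum to 1
        [x * y for x, y, z in atoms],                # E[XY]
        [y * z for x, y, z in atoms],                # E[YZ]
        [x * z for x, y, z in atoms]]                # E[XZ]
b_eq = [1.0, -1.0, -1.0, -1.0]

res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print("joint distribution exists:", res.success)     # False for these moments
```

Infeasibility here follows because the product XY·YZ·XZ = (XYZ)² must equal +1, while three pairwise expectations of -1 would force it toward -1.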
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have 1 or more larger descendants in total events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
Combined loading criterial influence on structural performance
NASA Technical Reports Server (NTRS)
Kuchta, B. J.; Sealey, D. M.; Howell, L. J.
1972-01-01
An investigation was conducted to determine the influence of combined loading criteria on space shuttle structural performance. The study consisted of four primary phases: (1) determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle; (2) determination of the sensitivity of structural weight to various levels of loading parameter variability and probability; (3) determination of shuttle mission loading parameter variability and probability as a function of design evolution, and identification of those loading parameters for which inadequate data exist; and (4) determination of rational methods of combining both deterministic time-varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit the survival data. Results: Summary statistics are reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for the survival data, the highest values of the goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer. Creative Commons Attribution License
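A minimal sketch of this kind of model screening using SciPy rather than EasyFit (a substitution, not the paper's tool): fit candidate distributions to survival times and compare Kolmogorov-Smirnov statistics. The Dagum distribution is represented here via scipy.stats.burr (Burr Type III), a common identification that is my assumption; the survival times are simulated stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-in survival times (months); real data would come from SEER records.
times = stats.gamma(a=2.0, scale=30.0).rvs(size=2000, random_state=rng)

candidates = {
    "Burr (Type XII)": stats.burr12,
    "Dagum (Burr Type III)": stats.burr,
    "Gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(times)                     # maximum-likelihood fit
    ks = stats.kstest(times, dist.name, args=params)
    print(f"{name:22s} KS statistic = {ks.statistic:.4f}")
# By the standard convention, the smallest KS statistic is the best fit.
```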
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
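A minimal numerical sketch of the forward-probability result (not the authors' full model; the learning rate and normalization scheme are illustrative assumptions): Hebbian co-activation updates with competition among each presynaptic neuron's outgoing weights drive the weight matrix toward the conditional forward transition probabilities of the input Markov sequence.

```python
import numpy as np

rng = np.random.default_rng(3)
# Ground-truth forward transition probabilities of a 3-state Markov chain.
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])

# Generate a long state sequence from the chain.
seq = [0]
for _ in range(100_000):
    seq.append(rng.choice(3, p=P[seq[-1]]))

# Hebbian updates: strengthen the synapse from the previous state's neuron
# to the current state's neuron, then renormalize that presynaptic neuron's
# outgoing weights (a simple stand-in for presynaptic competition).
W = np.full((3, 3), 1.0 / 3.0)
lr = 0.01
for pre, post in zip(seq[:-1], seq[1:]):
    W[pre, post] += lr
    W[pre] /= W[pre].sum()

print(np.round(W, 2))  # each row approximates the corresponding row of P
```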
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
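A minimal sketch of the discretized-Fredholm-plus-Tikhonov step (the grid, noise level, and regularization parameter are illustrative assumptions, not the paper's choices): build the standard projection kernel relating the true speed distribution f(v) to the observed v sin i distribution, then solve the ridge-regularized normal equations.

```python
import numpy as np

# Discretize the Fredholm kernel relating the true rotational speed
# distribution f(v) to the observed v*sin(i) distribution g(y):
#   g(y) = integral of f(v) * y / (v * sqrt(v^2 - y^2)) dv over v > y
n = 80
v = np.linspace(1.0, 400.0, n)        # km/s grid (assumed range)
y = np.linspace(1.0, 400.0, n)
dv = v[1] - v[0]
A = np.zeros((n, n))
for i, yi in enumerate(y):
    mask = v > yi
    A[i, mask] = yi / (v[mask] * np.sqrt(v[mask]**2 - yi**2)) * dv

# Synthetic observed distribution from a known f(v), plus noise.
f_true = np.exp(-0.5 * ((v - 150.0) / 40.0) ** 2)
f_true /= f_true.sum() * dv
g = A @ f_true + np.random.default_rng(4).normal(0, 1e-4, n)

# Tikhonov (ridge) solution of the ill-posed inverse problem.
lam = 1e-3                             # regularization parameter (assumed)
f_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
print(f"reconstruction peak near v = {v[np.argmax(f_est)]:.0f} km/s")
```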
NASA Astrophysics Data System (ADS)
Pereira, Guilherme Ferreira Lemos; Costa, Fanny Nascimento; Souza, José Antonio; Haddad, Paula Silvia; Ferreira, Fabio Furlan
2018-06-01
This article describes the synthesis of two superparamagnetic iron oxide nanoparticles (SPIONs) covered with different ligands - hydrophobic (oleic acid (OA)) and hydrophilic (tetraethyl ammonium (TEA)) - and the investigation of the effects of thermal treatments on the crystal structure of TEA-SPIONs and OA-SPIONs using X-ray powder diffraction data and parametric Rietveld refinements. We established non-crystallographic models to describe how the oxidation processes take place with increasing temperature in the two systems. The morphological and magnetic characterizations revealed that the nanoparticles have a mean diameter of ∼10 nm in the solid state and are superparamagnetic at room temperature. Magnetization measurements confirmed the superparamagnetic state of both systems and revealed smaller particle sizes and a narrower size distribution for OA-SPIONs than for TEA-SPIONs. The thermomagnetic analyses show only the ferrimagnetic phase transition of magnetite for OA-SPIONs, whereas for TEA-SPIONs an antiferromagnetic transition appears in addition, disclosing the evolution of a hematite phase, probably on the surface of the magnetite, due to the thermal cycles.
Statistical Mechanics of Labor Markets
NASA Astrophysics Data System (ADS)
Chen, He; Inoue, Jun-ichi
We introduce a probabilistic model of labor markets for university graduates, in particular in Japan. To model the market efficiently, we adopt several hypotheses. Namely, each company fixes a (business-year independent) number of opening positions for newcomers. The ability to gather newcomers depends on the results of the job matching process in past business years: a company's ability is weakened if it failed to make its quota or if it gathered far more applicants than its quota. All university graduates who are looking for jobs can access public information about the ranking of companies. Under these key assumptions, we construct the local energy function of each company and describe the probability that an arbitrary company gets students in each business year by a Boltzmann-Gibbs distribution. We evaluate relevant physical quantities such as the employment rate. We find that the system undergoes a sort of 'phase transition' from a 'good employment phase' to a 'poor employment phase' as one controls the degree of importance attached to the ranking.
Quantitative assessment of building fire risk to life safety.
Guanquan, Chu; Jinhua, Sun
2008-06-01
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with the time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of design fires is specified based on different fire growth rates, so that the uncertainty in the onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant premovement time is treated as a probability distribution. The consequences of a fire scenario can then be evaluated from the probability distributions of evacuation time and onset time to untenable conditions, and fire risk to life safety can be evaluated from the occurrence probability and consequences of every fire scenario. To illustrate the assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form; its characteristic is a constant hazard rate, and it is the special case of the Weibull distribution with shape parameter one. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
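A minimal sketch of the net and crude failure probabilities for two independent exponential risks (hazard values and horizon are illustrative assumptions): with constant hazards λ₁ and λ₂, the net probability for risk 1 by time t is 1 − exp(−λ₁t), while the crude probability in the presence of risk 2 is λ₁/(λ₁+λ₂) · (1 − exp(−(λ₁+λ₂)t)).

```python
import numpy as np

lam1, lam2, t = 0.02, 0.05, 20.0   # hazards per unit time and horizon (assumed)

# Net probability: failure from risk 1 by time t if it acted alone.
net1 = 1 - np.exp(-lam1 * t)

# Crude probability: failure from risk 1 by time t with risk 2 also present.
crude1 = lam1 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * t))

print(f"net P1 = {net1:.3f}, crude P1 = {crude1:.3f}")

# Monte Carlo check: simulate competing exponential failure times.
rng = np.random.default_rng(5)
t1 = rng.exponential(1 / lam1, 100_000)
t2 = rng.exponential(1 / lam2, 100_000)
print(f"MC crude P1 = {np.mean((t1 < t2) & (t1 < t)):.3f}")
```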
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
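A minimal re-creation of the program's core calculation in Python (the original is in C; this sketch uses bracketed root finding rather than the program's Newton iteration): given a count n and a target cumulative probability, solve for the Poisson parameter.

```python
from scipy import stats
from scipy.optimize import brentq

def poisson_param(n, cum_prob):
    """Find lambda such that P(X <= n) = cum_prob for X ~ Poisson(lambda)."""
    # The CDF decreases monotonically in lambda, so a bracketed root works.
    return brentq(lambda lam: stats.poisson.cdf(n, lam) - cum_prob, 1e-9, 1e3)

lam = poisson_param(n=5, cum_prob=0.95)
print(f"lambda = {lam:.4f}")
# Consistent with the gamma-distribution link mentioned above:
# P(Poisson(lam) <= n) = P(Gamma(n+1) > lam)
print(f"check: {stats.gamma.sf(lam, 5 + 1):.4f}")  # prints 0.9500
```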
Successive phase transitions and kink solutions in Φ⁸, Φ¹⁰, and Φ¹² field theories
Khare, Avinash; Christov, Ivan C.; Saxena, Avadh
2014-08-27
We obtain exact solutions for kinks in Φ⁸, Φ¹⁰, and Φ¹² field theories with degenerate minima, which can describe a second-order phase transition followed by a first-order one, a succession of two first-order phase transitions, and a second-order phase transition followed by two first-order phase transitions, respectively. Such phase transitions are known to occur in ferroelastic and ferroelectric crystals and in meson physics. In particular, we find that the higher-order field theories have kink solutions with algebraically-decaying tails and also asymmetric cases with mixed exponential-algebraic tail decay, unlike the lower-order Φ⁴ and Φ⁶ theories. Additionally, we construct distinct kinks with equal energies in all three field theories considered, and we show the co-existence of up to three distinct kinks (for a Φ¹² potential with six degenerate minima). We also summarize phonon dispersion relations for these systems, showing that the higher-order field theories have specific cases in which only nonlinear phonons are allowed. For the Φ¹⁰ field theory, which is a quasi-exactly solvable (QES) model akin to Φ⁶, we are also able to obtain three analytical solutions for the classical free energy as well as the probability distribution function in the thermodynamic limit.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
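A minimal sketch of sampling lesion locations from a binned probability distribution function (the bin counts here are random stand-ins; the real distribution came from the 3304 mapped lesions): flatten the 4D histogram, draw bin indices with np.random.choice, and unravel back to coordinates.

```python
import numpy as np

rng = np.random.default_rng(6)
# Stand-in 4D histogram over (x_CC, y_CC, x_ML, y_ML) bins on the
# standardized breast image; real counts would come from mapped lesions.
hist = rng.random((20, 20, 20, 20))
pdf = hist / hist.sum()

# Sample 5 lesion locations: draw flat bin indices, then unravel to 4D.
flat_idx = rng.choice(pdf.size, size=5, p=pdf.ravel())
coords = np.column_stack(np.unravel_index(flat_idx, pdf.shape))
print(coords)  # one row per sampled lesion: bin indices in the 4 dimensions
```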
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
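A quick check of the two expected values cited above using SciPy (a sketch; the distribution bounds follow the Panel's 0 to 10⁻⁶ range):

```python
from scipy import stats

upper = 1e-6
uniform = stats.uniform(loc=0, scale=upper)       # U(0, 1e-6)
triang = stats.triang(c=0.0, loc=0, scale=upper)  # mode at zero

print(f"uniform mean    = {uniform.mean():.2e}")  # 5.00e-07
print(f"triangular mean = {triang.mean():.2e}")   # 3.33e-07
```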
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
2017-03-27
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
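A minimal illustration of why the FFT is useful here (not the authors' method, which handles full fission chains): the neutron count from independent fission events is a sum of i.i.d. multiplicities, so its distribution is a repeated convolution, computed efficiently in Fourier space. The multiplicity pmf and the number of fissions are illustrative assumptions.

```python
import numpy as np

# Assumed single-fission neutron multiplicity pmf over counts 0..4.
p_nu = np.array([0.03, 0.16, 0.33, 0.30, 0.18])
k = 10                       # number of independent fissions (assumed)

# Distribution of the total neutron count = k-fold convolution of p_nu,
# done by raising the DFT of the (zero-padded) pmf to the k-th power.
size = k * (len(p_nu) - 1) + 1
f = np.fft.rfft(p_nu, n=size)
total = np.fft.irfft(f ** k, n=size)

print(f"mean total = {np.dot(np.arange(size), total):.2f}")  # k * E[nu]
print(f"sum of pmf = {total.sum():.6f}")                     # ~1
```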
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu
In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
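A minimal sketch of separating the two components from pixel-intensity samples. As a stand-in for Pearson's method of moments (which the paper uses), this fits the two-Gaussian mixture by expectation-maximization with scikit-learn; the intensity data and mode parameters are simulated assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Simulated pixel-intensity samples over time: a darker background mode
# plus a brighter, more variable foreground mode (assumed parameters).
background = rng.normal(60, 5, size=7000)
foreground = rng.normal(140, 25, size=3000)
samples = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
order = np.argsort(gmm.means_.ravel())
means = gmm.means_.ravel()[order]
stds = np.sqrt(gmm.covariances_.ravel()[order])
print(f"background: mean={means[0]:.1f}, std={stds[0]:.1f}")
print(f"foreground: mean={means[1]:.1f}, std={stds[1]:.1f}")
# The background component can be stored and compared across time windows
# to flag persistent changes (e.g., a newly placed object).
```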
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2009-01-01
The threat of lightning is a daily concern during the warm season in Florida. Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. Previously, the Applied Meteorology Unit (AMU) calculated gridded lightning climatologies based on seven flow regimes over Florida for 1-, 3- and 6-hr intervals in 5-, 10-, 20-, and 30-NM diameter range rings around the Shuttle Landing Facility (SLF) and eight other airfields in the National Weather Service in Melbourne (NWS MLB) county warning area (CWA). In this update to the work, the AMU recalculated the lightning climatologies using individual lightning strike data to improve their accuracy. The AMU included all data regardless of flow regime as one of the stratifications, added monthly stratifications, added three years of data to the period of record, and used modified flow regimes based on work from the AMU's Objective Lightning Probability Forecast Tool, Phase II. The AMU made these changes so the 5- and 10-NM radius range rings are consistent with the aviation forecast requirements at NWS MLB, while the 20- and 30-NM radius range rings at the SLF assist the Spaceflight Meteorology Group in making forecasts for weather Flight Rule violations during Shuttle landings. The AMU also updated the graphical user interface with the new data.
Applications of finite-size scaling for atomic and non-equilibrium systems
NASA Astrophysics Data System (ADS)
Antillon, Edwin A.
We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large basis limit. Results prove to be in good agreement with previous Slater-basis set calculations and demonstrate that this combined approach provides a promising first-principles approach to describe quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model, the raise and peel model, describing a surface that grows locally and has non-local desorption. For specific values of the adsorption (ua) and desorption (ud) rates the model shows interesting features. At ua = ud, the model is described by a conformal field theory (with conformal charge c = 0) and its stationary probability can be mapped to the ground state of a quantum chain and related to a two-dimensional statistical model. For ua ≥ ud, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.
Gas/particle partitioning and particle size distribution of PCDD/Fs and PCBs in urban ambient air.
Barbas, B; de la Torre, A; Sanz, P; Navarro, I; Artíñano, B; Martínez, M A
2018-05-15
Urban ambient air samples, including gas-phase (PUF), total suspended particulates (TSP), PM10, PM2.5 and PM1 airborne particle fractions, were collected to evaluate the gas-particle partitioning and particle size distribution of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and polychlorinated biphenyls (PCBs). The Clausius-Clapeyron equation, regressions of log Kp vs log PL and log KOA, and a human respiratory risk assessment were used to evaluate local or long-distance transport sources, gas-particle partitioning sorption mechanisms, and implications for health. Total ambient air levels (gas phase + particulate phase) of TPCBs and TPCDD/Fs were 437 and 0.07 pg m⁻³ (median), respectively. Levels of PCDD/F in the gas phase (0.004-0.14 pg m⁻³, range) were significantly (p<0.05) lower than those found in the particulate phase (0.02-0.34 pg m⁻³). The concentrations of PCDD/Fs were higher in winter. In contrast, PCBs were mainly associated with the gas phase and displayed maximum levels in warm seasons, probably due to an increase in evaporation rates, supported by the significant and strong positive dependence on temperature observed for several congeners. No significant differences in PCDD/F and PCB concentrations were detected between the different particle size fractions considered (TSP, PM10, PM2.5 and PM1), reflecting that these chemicals are mainly bound to PM1. The toxic content of the samples was also evaluated. Total toxicity (PUF+TSP) attributable to dl-PCBs (13.4 fg-TEQ05 m⁻³, median) was higher than that reported for PCDD/Fs (6.26 fg-TEQ05 m⁻³). The inhalation risk assessment concluded that inhalation of PCDD/Fs and dl-PCBs poses a low cancer risk in the studied area. Copyright © 2017 Elsevier B.V. All rights reserved.
Probabilistic Assessment of Cancer Risk from Solar Particle Events
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
2010-01-01
For long duration missions outside of the protection of the Earth's magnetic field, space radiation presents significant health risks including cancer mortality. Space radiation consists of solar particle events (SPEs), comprised largely of medium energy protons (less than several hundred MeV); and galactic cosmic rays (GCR), which include high energy protons and heavy ions. While the frequency distribution of SPEs depends strongly upon the phase within the solar activity cycle, the individual SPE occurrences themselves are random in nature. We estimated the probability of SPE occurrence using a non-homogeneous Poisson model to fit the historical database of proton measurements. Distributions of particle fluences of SPEs for a specified mission period were simulated ranging from the 5th to the 95th percentile to assess the cancer risk distribution. Spectral variability of SPEs was also examined, because the detailed energy spectra of protons are important, especially at high energy levels, for assessing the cancer risk associated with energetic particles for large events. We estimated the overall cumulative probability of the GCR environment for a specified mission period using a solar modulation model for the temporal characterization of the GCR environment represented by the deceleration potential (φ). Probabilistic assessment of fatal cancer risk was calculated for various periods of lunar and Mars missions. This probabilistic approach to risk assessment from space radiation is in support of mission design and operational planning for future manned space exploration missions. In future work, this probabilistic approach to the space radiation will be combined with a probabilistic approach to the radiobiological factors that contribute to the uncertainties in projecting cancer risks.
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ) while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of relative frequencies νn, but in the p-adic metric. We study a model with p-adic stochastics at the level of the hidden variable description; responses of macroapparatuses, of course, have to be described by ordinary stochastics. Thus our model describes a mixture of the p-adic stochastics of the microworld and the ordinary stochastics of macroapparatuses. In this model probabilities for physical observables are ordinary probabilities. At the same time Bell's inequality is violated.
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as probability of the response rate in one arm higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
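A minimal sketch of the winner-probability computation under standard Beta-Binomial assumptions (the priors and response counts are illustrative, not the trial's data): estimate P(p_A > p_B | data) by Monte Carlo over the two Beta posteriors.

```python
import numpy as np

rng = np.random.default_rng(8)
# Assumed stage results: responders / evaluable patients per arm.
resp_a, n_a = 14, 40
resp_b, n_b = 9, 40

# Beta(1, 1) priors give Beta(1 + responders, 1 + non-responders) posteriors.
post_a = rng.beta(1 + resp_a, 1 + n_a - resp_a, size=200_000)
post_b = rng.beta(1 + resp_b, 1 + n_b - resp_b, size=200_000)

winner_prob = np.mean(post_a > post_b)  # P(response rate A > response rate B)
print(f"P(arm A is the winner) = {winner_prob:.3f}")
```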
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
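For the special case with no extra constraints — the classic maximum-entropy random walk, a simpler setting than the paper's state- and path-dependent constraints — the transition probabilities follow from the leading eigenpair of the adjacency matrix: P_ij = (A_ij/λ)(ψ_j/ψ_i), with stationary distribution π_i ∝ ψ_i². A sketch on a small assumed graph:

```python
import numpy as np

# Adjacency matrix of a small undirected graph (illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Leading eigenpair (Perron-Frobenius): the eigenvector is entrywise positive.
eigvals, eigvecs = np.linalg.eigh(A)
lam, psi = eigvals[-1], np.abs(eigvecs[:, -1])

# Maximum-entropy random walk transition matrix and stationary distribution.
P = (A / lam) * psi[None, :] / psi[:, None]
pi = psi**2 / np.sum(psi**2)

print(np.round(P.sum(axis=1), 6))  # rows sum to 1
print(np.round(pi, 4))             # differs from the degree-proportional
                                   # stationary law of the ordinary random walk
```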
Geometric evolution of complex networks with degree correlations
NASA Astrophysics Data System (ADS)
Murphy, Charles; Allard, Antoine; Laurence, Edward; St-Onge, Guillaume; Dubé, Louis J.
2018-03-01
We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time t are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature β and general time-dependent chemical potential μ (t ) . The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that μ (t ) can be fixed to yield any given degree distributions, including a scale-free degree distribution. Additionally, we find that depending on the order in which nodes appear in the network—its history—the degree-degree correlations can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient 〈c 〉 . In the thermodynamic limit, we identify a phase transition between a random regime where 〈c 〉→0 when β <βc and a geometric regime where 〈c 〉>0 when β >βc .
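A minimal sketch of the attachment step, in one spatial dimension on a ring with a fixed chemical potential (both simplifications of the paper's general metric space and time-dependent μ(t)): each new node connects to existing nodes with the Fermi-Dirac probability evaluated at their distance.

```python
import numpy as np

rng = np.random.default_rng(9)
beta, mu = 2.5, 0.3   # inverse temperature and chemical potential (assumed)

def circle_distance(a, b):
    # Geodesic distance between angles on the unit circle.
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

positions = [rng.uniform(0, 2 * np.pi)]
edges = []
for t in range(1, 500):
    theta = rng.uniform(0, 2 * np.pi)     # new node, uniform on the circle
    d = circle_distance(theta, np.array(positions))
    p_connect = 1.0 / (1.0 + np.exp(beta * (d - mu)))  # Fermi-Dirac kernel
    links = np.nonzero(rng.random(t) < p_connect)[0]
    edges.extend((t, int(j)) for j in links)
    positions.append(theta)

degrees = np.bincount(np.array(edges).ravel(), minlength=500)
print(f"nodes: 500, edges: {len(edges)}, mean degree: {degrees.mean():.2f}")
```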
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for the annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2017-05-01
We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Since Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is applicable in a just and equilibrium economy, so that an income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g. the USA. This result implies that the hypothesis of equal probability may be suitable only for some "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability is unavailable.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
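A minimal sketch of the idea for a two-interval forced-choice task with discrete, non-Gaussian distributions (the article provides MATLAB code and a more general formula; this Python version and its pmfs are illustrative assumptions): the optimal observer picks the interval with the larger likelihood ratio, so the maximum proportion correct is P(LR_signal > LR_noise) plus half the tie probability.

```python
import numpy as np

# Assumed discrete sensory-activity pmfs on a common support 0..4.
p_noise = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
p_signal = np.array([0.05, 0.10, 0.20, 0.30, 0.35])

# Likelihood ratio at each activity level.
lr = p_signal / p_noise

# Max Pc in 2I-FC: signal-interval LR exceeds noise-interval LR (ties split).
pc = 0.0
for i, ps in enumerate(p_signal):        # activity in the signal interval
    for j, pn in enumerate(p_noise):     # activity in the noise interval
        if lr[i] > lr[j]:
            pc += ps * pn
        elif lr[i] == lr[j]:
            pc += 0.5 * ps * pn
print(f"maximum proportion correct = {pc:.3f}")
```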
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
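A minimal sketch of the counting step (the column intensities are simulated; real values come from model-based fits to the HAADF STEM image): fit Gaussian mixtures with increasing numbers of components, select the number by an information criterion (BIC here as a common stand-in for the paper's criterion), and assign each column to its most probable component, and hence atom count.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# Simulated scattered cross-sections for columns of 3, 4, or 5 atoms.
intensities = np.concatenate([
    rng.normal(3.0, 0.15, 60),
    rng.normal(4.0, 0.15, 80),
    rng.normal(5.0, 0.15, 40),
]).reshape(-1, 1)

# Scan the number of mixture components and keep the best BIC.
fits = [GaussianMixture(k, random_state=0).fit(intensities) for k in range(1, 7)]
best = min(fits, key=lambda g: g.bic(intensities))
print(f"selected components: {best.n_components}")

# Each column is assigned to its most probable component (atom count class).
labels = best.predict(intensities)
print(np.bincount(labels))  # columns per estimated thickness class
```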
3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss
NASA Astrophysics Data System (ADS)
Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.
2012-12-01
3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial and temporal distribution of the waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. These correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities, informed by the observed autocorrelation in the AE index; a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field; and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, following the work of Santolik et al. (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus. We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background: This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also, the distribution of the number of genera is considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
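The null model lends itself to direct simulation: each extant species independently speciates at rate λ and goes extinct at rate μ, and the genus size at a chosen observation time gives an empirical counterpart to the theoretical size distribution. A hedged sketch follows; the rates and horizon are illustrative, not the paper's values.

```python
# Sketch: Gillespie-style simulation of a linear birth-death genus.
import numpy as np

def genus_size(lam=1.0, mu=0.5, t_max=5.0, rng=np.random.default_rng(1)):
    n, t = 1, 0.0                    # start from a single founding species
    while n > 0:
        t += rng.exponential(1.0 / (n * (lam + mu)))   # next event time
        if t > t_max:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n                         # extinct genera contribute size 0

sizes = [genus_size() for _ in range(10_000)]   # empirical size distribution
```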
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined by integration over the other parameters. The approach is an alternative way of obtaining the posterior probability distribution, without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper, the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a small number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distributions for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
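On a discretized parameter grid, the construction reduces to the canonical form p(θ) ∝ exp(−βE(θ)), with β fixed so that the expected error equals the specified value. A minimal sketch, assuming the error-function values E have been computed on the grid and that min(E) < E_bar < mean(E) so a root for β exists:

```python
# Sketch: maximum-entropy distribution under a mean-error constraint.
import numpy as np
from scipy.optimize import brentq

def maxent_distribution(E, E_bar):
    E = np.asarray(E, dtype=float)
    def mean_error(beta):
        w = np.exp(-beta * (E - E.min()))    # shifted for numerical stability
        p = w / w.sum()
        return p @ E - E_bar                 # residual of <E>_p = E_bar
    beta = brentq(mean_error, 1e-8, 1e4)     # assumes min(E) < E_bar < mean(E)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta                 # canonical distribution and beta
```

Marginals then follow by summing the returned probabilities over the grid axes of the nuisance parameters.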
Probability distributions of continuous measurement results for conditioned quantum evolution
NASA Astrophysics Data System (ADS)
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, demonstrating anomalously large average outputs and a sudden jump in the time-integrated output. We present and discuss the numerical evaluation of the probability distribution, aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.
2016-01-01
The 2011 Tohoku earthquake produced an unexpectedly large amount of shallow slip, greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, earthquake size, and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733
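The modification step can be pictured as generating a power-law stochastic slip profile along dip and rescaling it by a depth-dependent density so that shallow-slip statistics follow the dynamically derived distribution. The sketch below is illustrative only: the taper `pdf` is a placeholder, not the probability density obtained from the rupture simulations.

```python
# Sketch: random spectral-synthesis slip profile, reweighted with depth.
import numpy as np

def stochastic_slip(n=256, rng=np.random.default_rng(2)):
    # power-law amplitude spectrum with random phases
    k = np.fft.rfftfreq(n); k[0] = k[1]
    spec = (k ** -1.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
    slip = np.fft.irfft(spec, n).real
    return slip - slip.min()                  # nonnegative slip

def depth_reweight(slip, depth, pdf_depth):
    w = pdf_depth(depth)                      # dynamically derived density
    return slip * w / w.mean()                # rescale, preserving mean slip

depth = np.linspace(0.0, 50e3, 256)           # metres down-dip
pdf = lambda z: np.exp(-z / 20e3)             # placeholder density only
modified = depth_reweight(stochastic_slip(), depth, pdf)
```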
NASA Astrophysics Data System (ADS)
Colombier, M.; Gurioli, L.; Druitt, T. H.; Shea, T.; Boivin, P.; Miallier, D.; Cluzel, N.
2017-02-01
Textural parameters such as density, porosity, pore connectivity, permeability, and vesicle size distributions of vesiculated and dense pyroclasts from the 9.4-ka eruption of Kilian Volcano were quantified to constrain conduit and eruptive processes. The eruption generated a sequence of five vertical explosions of decreasing intensity, producing pyroclastic density currents and tephra fallout. The initial and final phases of the eruption correspond to the fragmentation of a degassed plug, as suggested by the increase of dense juvenile clasts (bimodal density distributions) as well as non-juvenile clasts resulting from the reaming of a crater. In contrast, the intermediate eruptive phases were the result of more open-conduit conditions (unimodal density distributions, decreases in dense juvenile pyroclasts, and non-juvenile clasts). Vesicles within the pyroclasts are almost fully connected; however, there is a wide range of permeabilities, especially for the dense juvenile clasts. Textural analysis of the juvenile clasts reveals two vesiculation events: (1) an early nucleation event at low decompression rates during slow magma ascent, producing a population of large bubbles (>1 mm), and (2) a syn-explosive nucleation event, followed by growth and coalescence of small bubbles controlled by high decompression rates immediately prior to or during explosive fragmentation. The similarities in pyroclast textures between the Kilian explosions and those at Soufrière Hills Volcano on Montserrat, in 1997, imply that eruptive processes in the two systems were rather similar and probably common to vulcanian eruptions in general.
Tsallis non-extensive statistics and solar wind plasma complexity
NASA Astrophysics Data System (ADS)
Pavlos, G. P.; Iliopoulos, A. C.; Zastenker, G. N.; Zelenyi, L. M.; Karakatsanis, L. P.; Riazantseva, M. O.; Xenakis, M. N.; Pavlos, E. G.
2015-03-01
This article presents novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event, which took place on 26 September 2011. Solar wind plasma is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields (B, E) and matter fields (particle and current densities or bulk plasma distributions). This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level. It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), to explain the complexity of the solar wind dynamics, since these theories include smooth and differentiable spatial-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian non-extensive statistics with heavy-tailed probability distribution functions, which are related to the q-extension of the CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion theory and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992).
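The heavy-tailed statistics invoked here are commonly modelled with the q-Gaussian of Tsallis statistics, p(x) ∝ [1 + (q−1)βx²]^(1/(1−q)), which reduces to a Gaussian as q → 1 and develops power-law tails for q > 1. A hedged maximum-likelihood sketch follows; the data and starting values are illustrative, and the paper's estimation procedure may differ.

```python
# Sketch: maximum-likelihood fit of a q-Gaussian (1 < q < 3).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def qgauss_logpdf(x, q, beta):
    if not (1.0 < q < 3.0) or beta <= 0.0:
        return -np.inf
    # normalization constant for 1 < q < 3
    Cq = (np.sqrt(np.pi) * gamma((3 - q) / (2 * (q - 1)))
          / (np.sqrt((q - 1) * beta) * gamma(1 / (q - 1))))
    arg = 1 + (q - 1) * beta * x**2
    return np.log(arg) / (1 - q) - np.log(Cq)

def fit_qgauss(x):
    nll = lambda th: -np.sum(qgauss_logpdf(x, th[0], th[1]))
    return minimize(nll, x0=[1.5, 1.0], method="Nelder-Mead").x

x = np.random.default_rng(0).standard_t(df=3, size=5000)  # heavy-tailed demo
q_hat, beta_hat = fit_qgauss(x)
```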
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detection probability and precision of 2D and 3D atom localization can be significantly improved by adjusting the system parameters: the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.
External noise-induced transitions in a current-biased Josephson junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Qiongwei; Xue, Changfeng, E-mail: cfxue@163.com; Tang, Jiashi
We investigate noise-induced transitions in a current-biased and weakly damped Josephson junction in the presence of multiplicative noise. By using the stochastic averaging procedure, the averaged amplitude equation describing dynamic evolution near a constant phase difference is derived. Numerical results show that a stochastic Hopf bifurcation between an absorbing and an oscillatory state occurs. This means the external controllable noise triggers a transition into the non-zero junction voltage state. With the increase of noise intensity, the stationary probability distribution peak shifts and is characterised by increased width and reduced height. Different transition rates are shown for large and small bias currents.
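A direct numerical counterpart of this model is an Euler-Maruyama integration of the damped pendulum equation for the junction phase, with noise entering multiplicatively; the stationary distribution of the phase velocity (proportional to the junction voltage) can then be histogrammed. The velocity coupling below is an illustrative choice, not necessarily the paper's exact noise term.

```python
# Sketch: phi'' + gamma*phi' + sin(phi) = i_b + sigma*phi'*xi(t),
# integrated with Euler-Maruyama in reduced units.
import numpy as np

def simulate(i_b=0.3, gamma=0.05, sigma=0.2, dt=1e-3, n=200_000,
             rng=np.random.default_rng(3)):
    phi, v = np.arcsin(i_b), 0.0      # start at the constant-phase solution
    out = np.empty(n)
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        v += (-gamma * v - np.sin(phi) + i_b) * dt + sigma * v * dW
        phi += v * dt
        out[k] = v                    # junction voltage ~ phase velocity
    return out

v = simulate()   # histogram(v) approximates the stationary distribution
```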
Are groups of galaxies virialized systems?
NASA Technical Reports Server (NTRS)
Diaferio, Antonaldo; Ramella, Massimo; Geller, Margaret J.; Ferrari, Attilio
1993-01-01
Groups are systems of galaxies with crossing times t(cr) much smaller than the Hubble time. Most of them have t(cr) less than 0.1/H0. The usual interpretation is that they are in virial equilibrium. We compare the data of the group catalog selected from the CfA redshift survey extension with different N-body models. We show that the distributions of kinematic and dynamical quantities of the groups in the CfA catalog can be reproduced by a single collapsing group observed along different lines of sight. This result shows that (1) projection effects dominate the statistics of these systems, and (2) observed groups of galaxies are probably still in the collapse phase.
NASA Astrophysics Data System (ADS)
Polotto, Franciele; Drigo Filho, Elso; Chahine, Jorge; Oliveira, Ronaldo Junio de
2018-03-01
This work developed analytical methods to explore the kinetics of the time-dependent probability distributions over thermodynamic free energy profiles of protein folding and compared the results with simulation. The Fokker-Planck equation is mapped onto a Schrödinger-type equation due to the well-known solutions of the latter. Through a semi-analytical description, the supersymmetric quantum mechanics formalism is invoked and the time-dependent probability distributions are obtained with numerical calculations by using the variational method. A coarse-grained structure-based model of the two-state protein Tm CSP was simulated at a Cα level of resolution and the thermodynamics and kinetics were fully characterized. Analytical solutions from non-equilibrium conditions were obtained with the simulated double-well free energy potential and kinetic folding times were calculated. It was found that the analytical folding time as a function of temperature agrees, quantitatively, with simulations and experiments from the literature for Tm CSP, displaying the well-known 'U' shape of the chevron plots. The simple analytical model developed in this study has the potential to be used by theoreticians and experimentalists willing to explore, quantitatively, rates and the kinetic behavior of their system by informing the thermally activated barrier. The theory developed describes a stochastic process and, therefore, can be applied to a variety of biological as well as condensed-phase two-state systems.
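For comparison with such semi-analytical treatments, the time-dependent distribution over a double-well free energy profile can also be obtained by direct finite-difference integration of the Fokker-Planck equation, dP/dt = ∂x[D(∂xP + βF'(x)P)]. A minimal explicit-scheme sketch, in arbitrary units and with an illustrative potential rather than the simulated Tm CSP profile:

```python
# Sketch: explicit finite-difference Fokker-Planck evolution over a
# double-well free energy F(x), with crude reflecting-type boundaries.
import numpy as np

def evolve(F, x, P0, D=1.0, beta=1.0, dt=1e-5, steps=50_000):
    dx = x[1] - x[0]
    dF = np.gradient(F, dx)
    P = P0.copy()
    for _ in range(steps):
        J = -D * (np.gradient(P, dx) + beta * dF * P)   # probability flux
        P -= dt * np.gradient(J, dx)
        P[[0, -1]] = 0.0
        P /= np.trapz(P, x)                             # renormalize
    return P

x = np.linspace(-2.5, 2.5, 401)
F = (x**2 - 1.0)**2                                     # double well
P0 = np.exp(-200 * (x + 1.0)**2); P0 /= np.trapz(P0, x) # start in left well
P = evolve(F, x, P0)   # time-dependent distribution after steps*dt
```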
Evolution of a Modified Binomial Random Graph by Agglomeration
NASA Astrophysics Data System (ADS)
Kang, Mihyun; Pachon, Angelica; Rodríguez, Pablo M.
2018-02-01
In the classical Erdős-Rényi random graph G(n, p) there are n vertices and each of the possible edges is independently present with probability p. The random graph G(n, p) is homogeneous in the sense that all vertices have the same characteristics. On the other hand, numerous real-world networks are inhomogeneous in this respect. Such an inhomogeneity of vertices may influence the connection probability between pairs of vertices. The purpose of this paper is to propose a new inhomogeneous random graph model which is obtained in a constructive way from the Erdős-Rényi random graph G(n, p). Given a configuration of n vertices arranged in N subsets of vertices (we call each subset a super-vertex), we define a random graph with N super-vertices by letting two super-vertices be connected if and only if there is at least one edge between them in G(n, p). Our main result concerns the threshold for connectedness. We also analyze the phase transition for the emergence of the giant component and the degree distribution. Even though our model begins with G(n, p), it assumes the existence of some community structure encoded in the configuration. Furthermore, under certain conditions it exhibits a power law degree distribution. Both properties are important for real-world applications.
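The construction is easy to reproduce numerically: sample G(n, p), assign each vertex to one of N super-vertices, and join two super-vertices whenever at least one underlying edge connects them. A sketch using networkx, with a uniformly random configuration as one illustrative choice of community structure:

```python
# Sketch: agglomeration of G(n, p) into an N-super-vertex graph.
import networkx as nx
import numpy as np

def agglomerate(n=1000, p=0.002, N=50, rng=np.random.default_rng(4)):
    G = nx.gnp_random_graph(n, p, seed=int(rng.integers(1 << 31)))
    block = rng.integers(0, N, size=n)       # configuration of super-vertices
    H = nx.Graph()
    H.add_nodes_from(range(N))
    for u, v in G.edges():
        if block[u] != block[v]:
            H.add_edge(block[u], block[v])   # one underlying edge suffices
    return H

H = agglomerate()
print(nx.is_connected(H), max(len(c) for c in nx.connected_components(H)))
```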
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by a recently proposed statistical model including both boresight and jitter. Analytical average bit error rate (ABER) and outage probability expressions based on binary phase-shift keying subcarrier intensity modulation are derived for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement whether boresight is included or not, regardless of the values of P and Q. The performance enhancement owing to an increase in the number of cooperative paths (P) is more evident with nonzero boresight than with zero boresight (jitter only), whereas the performance deterioration caused by an increasing number of hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of the ABER and outage probability expressions.
NASA Astrophysics Data System (ADS)
Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.
2015-12-01
Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership to identify and evaluate new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
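In Chiu's formulation, the velocity distribution is u/u_max = (1/M) ln[1 + (e^M − 1)ξ], where ξ is a normalized position and M an entropy parameter, so the cross-section mean velocity follows from φ(M) = u_mean/u_max = e^M/(e^M − 1) − 1/M. A hedged sketch of the resulting discharge computation, assuming the radar-measured (wind-corrected) surface velocity approximates u_max; the values of M and the flow area are illustrative.

```python
# Sketch: discharge from a surface-velocity measurement via the
# Probability Concept mean-to-max velocity ratio phi(M).
import numpy as np

def phi(M):
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

def discharge(u_surface, area, M=2.1):
    u_max = u_surface               # assumes maximum velocity at the surface
    return phi(M) * u_max * area    # m^3/s for SI inputs

print(discharge(u_surface=1.8, area=42.0))   # example: about 50 m^3/s
```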
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox's method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
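The derived weighting function is easy to evaluate directly: w(p) = (1 − k ln p)^(−1) satisfies w(1) = 1 and, for k > 0, overweights small probabilities (e.g., w(0.1) ≈ 0.30 for k = 1). A short illustration, with the values of k chosen arbitrarily:

```python
# Sketch: evaluate the hyperbolic-discounting weighting function.
import numpy as np

def w(p, k):
    p = np.asarray(p, dtype=float)
    return 1.0 / (1.0 - k * np.log(p))

for k in (0.5, 1.0, 2.0):
    print(f"k={k}: w(0.1)={w(0.1, k):.3f}, w(0.9)={w(0.9, k):.3f}")
```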
Hybrid Approaches and Industrial Applications of Pattern Recognition,
1980-10-01
emphasized that the probability distribution in (9) is correct only under the assumption that P(w|x) is known exactly. In practice this assumption will...sufficient precision. The alternative would be to take the probability distribution of estimates of P(w|x) into account in the analysis. However, from the
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, whose probability density function is given, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
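Fitting a beta distribution to fractional cloud-cover records is straightforward with standard tools; a U-shaped case (mostly clear or mostly overcast skies) corresponds to both shape parameters below one. The sketch below uses synthetic data as a stand-in for the station records in the study.

```python
# Sketch: fit a beta distribution to cloud-cover fractions in [0, 1]
# and check the fit with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
cloud = np.clip(rng.beta(0.4, 0.6, size=1000), 1e-6, 1 - 1e-6)  # U-shaped demo

a, b, loc, scale = stats.beta.fit(cloud, floc=0.0, fscale=1.0)
ks = stats.kstest(cloud, stats.beta(a, b).cdf)
print(f"alpha={a:.2f}, beta={b:.2f}, KS p-value={ks.pvalue:.3f}")
```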
May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M
2018-03-13
Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
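The link between a cooling-ramp experiment and nucleation theory can be sketched as a survival calculation: with a nucleation rate J(ΔT) that grows sharply with subcooling and a linear ramp, the probability of having formed by subcooling ΔT is one minus the survival integral. The lumped parameters J0 and b below are illustrative placeholders, not fitted values from the paper.

```python
# Sketch: formation-probability CDF versus subcooling for a linear ramp,
# assuming J(dT) = J0 * exp(b * dT) (classical-nucleation-style growth).
import numpy as np

def formation_cdf(dT, ramp_K_per_s, J0=1e-6, b=0.5):
    # P(dT) = 1 - exp( -(1/ramp) * integral_0^dT J(s) ds )
    integral = (J0 / b) * (np.exp(b * dT) - 1.0)
    return 1.0 - np.exp(-integral / ramp_K_per_s)

dT = np.linspace(0.0, 30.0, 301)
P = formation_cdf(dT, ramp_K_per_s=2.0 / 60.0)   # 2 K/min ramp, as in the cells
```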
Identification, prediction, and mitigation of sinkhole hazards in evaporite karst areas
Gutierrez, F.; Cooper, A.H.; Johnson, K.S.
2008-01-01
Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility and, commonly, a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, investigation, prediction, and mitigation. Identification techniques include field surveys and geomorphological mapping combined with accounts from local people and historical sources. Detailed sinkhole maps can be constructed from sequential historical maps, recent topographical maps, and digital elevation models (DEMs) complemented with building-damage surveying, remote sensing, and high-resolution geodetic surveys. On a more detailed level, information from exposed paleosubsidence features (paleokarst), speleological explorations, geophysical investigations, trenching, dating techniques, and boreholes may help in investigating dissolution and subsidence features. Information on the hydrogeological pathways, including caves, springs, and swallow holes, is particularly important, especially when corroborated by tracer tests. These diverse data sources make a valuable database: the karst inventory. From this dataset, sinkhole susceptibility zonations (relative probability) may be produced based on the spatial distribution of the features and good knowledge of the local geology. Sinkhole distribution can be investigated by spatial distribution analysis techniques including studies of preferential elongation, alignment, and nearest neighbor analysis. More objective susceptibility models may be obtained by analyzing the statistical relationships between the known sinkholes and the conditioning factors. Chronological information on sinkhole formation is required to estimate the probability of occurrence of sinkholes (number of sinkholes/km² per year). Such spatial and temporal predictions, frequently derived from limited records and based on the assumption that past sinkhole activity may be extrapolated to the future, are non-corroborated hypotheses. Validation methods allow us to assess the predictive capability of the susceptibility maps and to transform them into probability maps. Avoiding the most hazardous areas by preventive planning is the safest strategy for development in sinkhole-prone areas. Corrective measures could be applied to reduce the dissolution activity and subsidence processes. A more practical solution for safe development is to reduce the vulnerability of the structures by using subsidence-proof designs. © 2007 Springer-Verlag.
The statistics of Pearce element diagrams and the Chayes closure problem
NASA Astrophysics Data System (ADS)
Nicholls, J.
1988-05-01
Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychophysical data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
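A log-normal target transfer probability function is simply the log-normal CDF in the stimulus variable: probability of detection rises from 0 to 1 as the contrast (or equivalent quantity) passes the 50% threshold. A short sketch, with illustrative parameter values:

```python
# Sketch: log-normal probability of detection with median threshold C50
# and log-space spread sigma.
import numpy as np
from scipy.stats import norm

def p_detect(C, C50=1.0, sigma=0.35):
    return norm.cdf(np.log(C / C50) / sigma)

for C in (0.5, 1.0, 2.0):
    print(f"C/C50={C:.1f}: Pd={p_detect(C):.2f}")
```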
Event patterns extracted from top quark-related spectra in proton-proton collisions at 8 TeV
NASA Astrophysics Data System (ADS)
Chen, Ya-Hui; Liu, Fu-Hu; Lacey, Roy A.
2018-02-01
We analyze the transverse momentum (p_T) and rapidity (y) spectra of top quark pairs, hadronic top quarks, and top quarks produced in proton-proton (pp) collisions at center-of-mass energy √s = 8 TeV. For p_T spectra, we use the superposition of the inverse power-law suggested by the QCD (quantum chromodynamics) calculus and the Erlang distribution resulting from a multisource thermal model. For y spectra, we use the two-component Gaussian function resulting from the revised Landau hydrodynamic model. The modelling results are in agreement with the experimental data measured at the detector level, in the fiducial phase-space, and in the full phase-space by the ATLAS Collaboration at the Large Hadron Collider (LHC). Based on the parameter values extracted from the p_T and y spectra, the event patterns in three-dimensional velocity (β_x-β_y-β_z), momentum (p_x-p_y-p_z), and rapidity (y_1-y_2-y) spaces are obtained, and the probability distributions of these components are also obtained. Supported by National Natural Science Foundation of China (11575103, 11747319), the Shanxi Provincial Natural Science Foundation (201701D121005), the Fund for Shanxi “1331 Project” Key Subjects Construction and the US DOE (DE-FG02-87ER40331.A008)
NASA Astrophysics Data System (ADS)
Masnadi, N.; Duncan, J. H.
2013-11-01
The non-linear response of a water surface to a slow-moving pressure distribution is studied experimentally using a vertically oriented carriage-mounted air-jet tube that is set to translate over the water surface in a long tank. The free surface deformation pattern is measured with a full-field refraction-based method that utilizes a vertically oriented digital movie camera (under the tank) and a random dot pattern (above the water surface). At towing speeds just below the minimum phase speed of gravity-capillary waves (c_min ≈ 23 cm/s), an unsteady V-shaped pattern is formed behind the pressure source. Localized depressions are generated near the source and propagate in pairs along the two arms of the V-shaped pattern. These depressions are eventually shed from the tips of the pattern at a frequency of about 1 Hz. It is found that the shape and phase speeds of the first depressions shed in each run are quantitatively similar to the freely propagating gravity-capillary lumps from potential flow calculations. In the experiments, the amplitudes of the depressions decrease by approximately 60 percent while travelling 12 wavelengths. The depressions shed later in each run behave in a less consistent manner, probably due to their interaction with neighboring depressions.
Results of the 1989 experiment with a polarimetric multifrequency SAR
NASA Astrophysics Data System (ADS)
Groot, J. S.; Vandenbroek, A. C.
1992-03-01
In August 1989, a single-day measurement campaign was conducted with two airborne imaging radars, the Dutch X band Side Looking Airborne Radar (SLAR) and the P/L/C band polarimetric Synthetic Aperture Radar (SAR). Test sites were the Flevopolder and the Veluwe (agricultural and forested areas, respectively) in the Netherlands. Ground activities included the deployment of several calibration devices such as different-sized trihedrals, dihedrals, and PARCs (active calibrators). An extensive experiment description is presented. Relevant details of the two radar systems and the calibration devices used are given. The calibration of the SLAR and SAR data is described. The calibrated data are used to investigate their potential for the classification of agricultural crop types. Several quantities are extracted from the data for this purpose. Examples are the copolarization phase difference distribution, the degree of polarization, and copolarized signatures. It appears to be quite possible to discriminate bare soil fields from vegetated fields using polarimetric quantities like the HH/VV (Horizontal Horizontal/Vertical Vertical) phase difference distribution and the degree of polarization. However, discrimination between fields with different crop types is much more difficult, probably due to interference from features not related to the crop type, such as soil moisture, soil roughness, lodging, etc.
Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2015-12-01
The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probabilities of tsunamis derived from individual subduction zones, scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences, we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
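The aggregation step can be sketched numerically: draw per-zone seismic moments from tapered Pareto-like distributions, map them to station amplitudes with a log-linear regression, and pool the samples weighted by each zone's event rate. All coefficients below are illustrative placeholders, not the study's fitted values, and the thinning step only approximates a tapered Pareto.

```python
# Sketch: mixture of site-specific amplitude distributions from two zones.
import numpy as np

rng = np.random.default_rng(6)

def zone_amplitudes(n, beta, Mc, a, b, dist_km, sigma=0.3):
    M0 = (rng.pareto(beta, n) + 1.0) * 1e20            # moment samples (N*m)
    M0 = M0[rng.random(M0.size) < np.exp(-M0 / Mc)]    # approximate taper
    logA = a + b * np.log10(M0) - np.log10(dist_km)    # regression (sketch)
    return 10 ** (logA + sigma * rng.standard_normal(M0.size))

zones = [dict(rate=0.8, beta=0.9, Mc=1e22, a=-14.0, b=0.90, dist_km=7000),
         dict(rate=0.3, beta=0.7, Mc=5e22, a=-15.5, b=0.95, dist_km=4000)]
amps = np.concatenate([zone_amplitudes(int(1e5 * z["rate"]), z["beta"],
                       z["Mc"], z["a"], z["b"], z["dist_km"]) for z in zones])
# sorted exceedance of `amps` gives the aggregated amplitude distribution
```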
Villalobos-Hernández, J R; Müller-Goymann, C C
2006-09-28
Carnauba wax is partially composed of cinnamates. The rational combination of cinnamates and titanium dioxide has shown a synergistic effect improving the sun protection factor (SPF) of cosmetic preparations. However, the mechanism of this interaction has not been fully understood. In this study, an ethanolic extract of carnauba wax and an ethanolic solution of a typical cinnamate derivative, ethyl cinnamate, were prepared, and their UV absorption and SPF, either alone or in the presence of titanium dioxide, were compared. The titanium dioxide crystals and the cinnamate solutions were also distributed into a matrix composed of saturated fatty acids to emulate the structure of the crystallized carnauba wax. SPF, differential scanning calorimetry (DSC), and X-ray studies of these matrices were performed. Additionally, carnauba wax nanosuspensions containing titanium dioxide either in the lipid phase or in the aqueous phase were prepared to evaluate their SPFs and their physical structure. Strong UV absorption was observed in diluted suspensions of titanium dioxide after the addition of cinnamates. The saturated fatty acid matrices probably favored the adsorption of the cinnamates at the surface of the titanium dioxide crystals, which was reflected by an increase in the SPF. No modification of the crystal structure of the fatty acid matrices was observed after the addition of cinnamates or titanium dioxide. The distribution of titanium dioxide inside the lipid phase of the nanosuspensions was more effective in reaching higher SPFs than that in the aqueous phase. The close contact between the carnauba wax and the titanium dioxide crystals after the high-pressure homogenization process was confirmed by transmission electron microscopy (TEM).
Multiple-Event Seismic Location Using the Markov-Chain Monte Carlo Technique
NASA Astrophysics Data System (ADS)
Myers, S. C.; Johannesson, G.; Hanley, W.
2005-12-01
We develop a new multiple-event location algorithm (MCMCloc) that utilizes the Markov-Chain Monte Carlo (MCMC) method. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with observations and prior estimates of data and model uncertainties. Model parameters in MCMCloc consist of event hypocenters and travel-time predictions. Data are arrival time measurements and phase assignments. Posterior estimates of event locations, path corrections, pick errors, and phase assignments are made through analysis of the posterior suite of acceptable solutions. Prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, the probability of misidentifying one phase for another, and the probability of spurious data. Inclusion of prior constraints on location accuracy allows direct utilization of ground-truth locations or well-constrained location parameters (e.g. from InSAR) that aid in the accuracy of the solution. Implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. Transition in behavior between a multiple-event locator for tightly clustered events and a single-event locator for solitary events is controlled by the spatial correlation of travel-time predictions. We test the MCMC locator on a regional data set of Nevada Test Site nuclear explosions. Event locations and origin times are known for these events, allowing us to test the features of MCMCloc using a high-quality ground truth data set. Preliminary tests suggest that MCMCloc provides excellent relative locations, often outperforming traditional multiple-event location algorithms, and excellent absolute locations are attained when constraints from one or more ground truth events are included. When phase assignments are switched, we find that MCMCloc properly corrects the error when predicted arrival times are separated by several seconds. In cases where the predicted arrival times are within the combined uncertainty of prediction and measurement errors, MCMCloc determines the probability of one or the other phase assignment and propagates this uncertainty into all model parameters. We find that MCMCloc is a promising method for simultaneously locating large, geographically distributed data sets. Because we incorporate prior knowledge on many parameters, MCMCloc is ideal for combining trusted data with data of unknown reliability. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-ABS-215048
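The MCMC machinery can be illustrated on the simplest ingredient of such a locator: sampling the posterior of a single event's epicentre and origin time from arrival times at known stations, with a uniform velocity and Gaussian pick errors. The full MCMCloc adds correlated travel-time predictions, phase relabelling, and multiple events; everything below is a stripped-down, synthetic-data sketch.

```python
# Sketch: Metropolis sampling of (x, y, t0) for one event.
import numpy as np

rng = np.random.default_rng(7)
stations = rng.uniform(-100, 100, size=(8, 2))       # station coords, km
true = np.array([12.0, -30.0, 0.0])                  # x, y (km), t0 (s)
v = 6.0                                              # km/s, uniform medium

def tt(m):                                           # predicted arrival times
    return m[2] + np.hypot(*(stations - m[:2]).T) / v

obs = tt(true) + 0.1 * rng.standard_normal(8)        # 0.1 s pick errors

def log_post(m):                                     # Gaussian likelihood
    return -0.5 * np.sum(((obs - tt(m)) / 0.1) ** 2)

m, lp, samples = np.zeros(3), log_post(np.zeros(3)), []
for _ in range(50_000):
    prop = m + rng.normal(0.0, [1.0, 1.0, 0.05])     # random-walk proposal
    lp_p = log_post(prop)
    if np.log(rng.random()) < lp_p - lp:
        m, lp = prop, lp_p
    samples.append(m.copy())
posterior = np.array(samples[10_000:])               # discard burn-in
```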
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10–3 in scientific notation or for brevity 10–3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10–3 to 10–6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
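The core of the extrapolation can be illustrated with a station-skew log-Pearson type III fit by the method of moments (Bulletin 17B style, without the EMA refinements or the study's full treatment of uncertainty); the synthetic record below stands in for annual peak-streamflow data.

```python
# Sketch: fit Pearson type III to log10 annual peaks and extrapolate
# to very low annual exceedance probabilities (AEPs).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
peaks = 10 ** (3.5 + 0.3 * rng.standard_normal(80))   # synthetic record, cfs

logq = np.log10(peaks)
mu, sd = logq.mean(), logq.std(ddof=1)
g = stats.skew(logq, bias=False)                      # station skew

for aep in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    k = stats.pearson3.ppf(1 - aep, g)                # standardized quantile
    print(f"AEP {aep:.0e}: Q = {10 ** (mu + sd * k):,.0f} cfs")
```

Repeating the fit with other distributions (generalized extreme value, generalized log-normal, and so on) and comparing the divergence of the quantiles as AEP decreases mirrors the distribution-choice uncertainty the study quantifies.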
NASA Astrophysics Data System (ADS)
Taner, M. U.; Ray, P.; Brown, C.
2016-12-01
Hydroclimatic nonstationarity due to climate change poses challenges for long-term water infrastructure planning in river basin systems. While strategies that are flexible or adaptive hold intuitive appeal, the development of well-performing strategies requires rigorous quantitative analysis that addresses uncertainties directly while making the best use of scientific information on the expected evolution of future climate. Multi-stage robust optimization (RO) offers a potentially effective and efficient technique for addressing the problem of staged basin-level planning under climate change; however, the necessity of assigning probabilities to future climate states or scenarios is an obstacle to implementation, given that methods to reliably assign probabilities to future climate states are not well developed. We present a method that overcomes this challenge by creating a bottom-up RO-based framework that decreases the dependency on probability distributions of future climate and instead employs them after optimization to aid selection among competing alternatives. The iterative process yields a vector of 'optimal' decision pathways, each under an associated set of probabilistic assumptions. In the final phase, the vector of optimal decision pathways is evaluated to identify the solutions that are least sensitive to the scenario probabilities and are most likely conditional on the climate information. The framework is illustrated for the planning of new dam and hydro-agricultural expansion projects in the Niger River Basin over a 45-year planning period from 2015 to 2060.
Polynomial probability distribution estimation using the method of moments
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
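The procedure's core step can be sketched as a linear solve: on a support [a, b], choosing coefficients of f(x) = Σ c_j x^j so that the first N moments of f match the sample moments yields a Hankel-type system with entries ∫_a^b x^(i+j) dx. A minimal illustration follows; the support and data are illustrative, and, like any truncated expansion, the resulting approximation can dip below zero in the tails.

```python
# Sketch: Nth-degree polynomial PDF approximation by moment matching.
import numpy as np

def poly_pdf(moments, a, b):
    N = len(moments)                         # moments m_0..m_{N-1}, m_0 = 1
    i, j = np.indices((N, N))
    A = (b ** (i + j + 1) - a ** (i + j + 1)) / (i + j + 1)
    c = np.linalg.solve(A, np.asarray(moments, dtype=float))
    return np.poly1d(c[::-1])                # polynomial PDF approximation

x = np.random.default_rng(9).normal(0.5, 0.15, 10_000)
m = [np.mean(x ** k) for k in range(6)]      # sample moments m_0..m_5
f = poly_pdf(m, 0.0, 1.0)                    # evaluate with f(t) on [0, 1]
```

The polynomial form also makes convolutions convenient, since products and integrals of polynomials stay in closed form, which is the practical advantage the abstract highlights.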