Sample records for probability distribution obtained

  1. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. A total of 30 years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for the case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
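
    As a rough illustration of the copula machinery (shown bivariate here for brevity; the paper works with trivariate copulas), the sketch below evaluates a Gumbel-Hougaard copula for two marginals and converts it into joint return periods for "both exceed" and "either exceeds" events. The marginal CDF values and the dependence parameter theta are invented; in the paper the marginals are fitted Pearson Type III distributions and the copula parameter is estimated from the hindcast data, and the yearly return periods assume annual-maximum marginals.

    ```python
    import numpy as np

    def gumbel_hougaard_copula(u, v, theta):
        """Bivariate Gumbel-Hougaard copula C(u, v), theta >= 1."""
        return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

    # Illustrative marginal CDF values u = F_H(h), v = F_W(w) for some chosen
    # wave height h and wind speed w; in practice these come from the fitted
    # Pearson Type III marginals.
    u, v = 0.98, 0.95
    theta = 2.0          # assumed dependence parameter, e.g. fitted from Kendall's tau

    C = gumbel_hougaard_copula(u, v, theta)

    # Return period (years) of the joint event {H > h and W > w}:
    p_exceed_both = 1.0 - u - v + C
    T_and = 1.0 / p_exceed_both

    # Return period of the event {H > h or W > w}:
    T_or = 1.0 / (1.0 - C)
    print(f"T_and = {T_and:.1f} yr, T_or = {T_or:.1f} yr")
    ```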

  2. Investigation of Dielectric Breakdown Characteristics for Double-break Vacuum Interrupter and Dielectric Breakdown Probability Distribution in Vacuum Interrupter

    NASA Astrophysics Data System (ADS)

    Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi

    To investigate the reliability of vacuum-insulated equipment, a study was carried out to clarify the breakdown probability distribution of a vacuum gap. The breakdown probability distribution of a double-break vacuum circuit breaker was also investigated. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage at which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14, and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5, and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' sharing voltage is taken into account.

  3. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem, given by a Fredholm integral of the first kind, to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to verify that the Tikhonov method is a consistent and asymptotically unbiased estimator. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.
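
    The following is a minimal numerical sketch of the Tikhonov step, not the authors' implementation: the Fredholm integral relating the density of y = v sin i (for isotropic inclinations the kernel is y / (v sqrt(v^2 - y^2)) for v > y) to the true speed density f(v) is discretized into a kernel matrix A, and f is recovered by solving the regularized normal equations (A^T A + lam*I) f = A^T g. The grids, the placeholder data g, and the value of lam are all assumptions of this sketch.

    ```python
    import numpy as np

    v = np.linspace(1.0, 400.0, 200)          # true-speed grid (km/s)
    y = np.linspace(1.0, 400.0, 200)          # observed v*sin(i) grid
    dv = v[1] - v[0]

    # Projection kernel for isotropic inclinations: K(y, v) = y / (v*sqrt(v^2 - y^2)).
    V, Y = np.meshgrid(v, y)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(V > Y, Y / (V * np.sqrt(V**2 - Y**2)), 0.0)
    A = K * dv                                 # quadrature weight folded in

    g = np.exp(-0.5 * ((y - 150.0) / 40.0) ** 2)   # placeholder observed density
    lam = 1e-3                                 # Tikhonov parameter (to be tuned)
    f = np.linalg.solve(A.T @ A + lam * np.eye(len(v)), A.T @ g)
    ```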

  4. Score distributions of gapped multiple sequence alignments down to the low-probability tail

    NASA Astrophysics Data System (ADS)

    Fieth, Pascal; Hartmann, Alexander K.

    2016-08-01

    Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical-mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple-sampling studies, strong deviations from the Gumbel distribution occur for finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignments differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.

  5. On probability-possibility transformations

    NASA Technical Reports Server (NTRS)

    Klir, George J.; Parviz, Behzad

    1992-01-01

    Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.

  6. A discussion on the origin of quantum probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel

    We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.

  7. Option volatility and the acceleration Lagrangian

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang

    2014-01-01

    This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is solved by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.

  8. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  9. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks.

    PubMed

    Zhuang, Jiancang; Ogata, Yosihiko

    2006-04-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases in which the process is subcritical, critical, or supercritical. One of the direct uses of these probability distributions is to evaluate the probability of an earthquake being a foreshock, and the magnitude distributions of foreshocks and non-foreshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events differ, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.

  10. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and the derivation of the distributions is discussed.

  11. Probabilistic analysis of preload in the abutment screw of a dental implant complex.

    PubMed

    Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R

    2008-09-01

    Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominantly affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
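
    A minimal Monte Carlo sketch of the sampling approach (one of the two probabilistic methods named above) is given below. A textbook bolt relation F = T / (K d) stands in for the paper's finite-element model, and all distribution parameters and the target window are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical input distributions standing in for the paper's random variables.
    torque = rng.normal(32.0, 2.0, n)      # applied torque, N*cm (assumed)
    K      = rng.normal(0.20, 0.03, n)     # nut factor, absorbs friction (assumed)
    d      = 0.2                           # screw diameter, cm

    preload = torque / (K * d)             # preload force in N

    target = (550.0, 750.0)                # illustrative optimal-preload window
    p_ok = np.mean((preload >= target[0]) & (preload <= target[1]))
    print(f"P(preload in target window) = {p_ok:.1%}")
    ```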

  12. Quantitative assessment of building fire risk to life safety.

    PubMed

    Guanquan, Chu; Jinhua, Sun

    2008-06-01

    This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: the probability and the corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, a Markov chain is combined with a time-dependent event tree for stochastic analysis of the occurrence probability of fire scenarios. To obtain the consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires is designed based on different fire growth rates, after which the uncertainty of the onset time to untenable conditions can be characterized by a probability distribution. When calculating occupant evacuation time, occupant pre-movement time is treated as a probability distribution. The consequences of a fire scenario can then be evaluated according to the probability distributions of evacuation time and onset time of untenable conditions. Fire risk to life safety can then be evaluated based on the occurrence probability and consequences of every fire scenario. To illustrate the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
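
    The consequence step lends itself to a simple Monte Carlo sketch: treat evacuation time (pre-movement plus movement) and the onset time of untenable conditions as random variables and estimate the probability that evacuation is not complete in time. The distributions below are illustrative assumptions, not the article's calibrated inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    premovement = rng.lognormal(mean=np.log(60.0), sigma=0.5, size=n)   # s (assumed)
    movement    = rng.normal(120.0, 20.0, size=n)                        # s (assumed)
    evac_time   = premovement + movement

    onset_untenable = rng.normal(300.0, 60.0, size=n)                    # s (assumed)

    # Consequence measure for this scenario: evacuation not finished in time.
    p_fail = np.mean(evac_time > onset_untenable)
    print(f"P(evacuation incomplete at untenable conditions) = {p_fail:.2%}")
    ```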

  13. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate, for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution, and with the differences in ignition times of adjacent reaction cells following non-Markovian statistics, the solution for the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulations. However, such simulations, which are based on Monte-Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shape parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits show excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of the burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.

  14. Newton/Poisson-Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.

    1990-01-01

    NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
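
    NEWTPOIS itself is written in C; the sketch below reproduces the underlying Newton iteration in Python. It finds the Poisson parameter lambda for which P(X <= n; lambda) equals a given cumulative probability, using the identity d/dlambda P(X <= n; lambda) = -pmf(n; lambda). The starting point and tolerances are choices of this sketch, not the program's.

    ```python
    from math import exp, lgamma, log

    def poisson_cdf(n, lam):
        """P(X <= n) for X ~ Poisson(lam), summed in log space for stability."""
        return sum(exp(k * log(lam) - lam - lgamma(k + 1)) for k in range(n + 1))

    def poisson_param(n, p, tol=1e-10, max_iter=100):
        """Newton iteration for lam such that P(X <= n; lam) = p."""
        lam = float(n + 1)                     # reasonable starting point
        for _ in range(max_iter):
            f = poisson_cdf(n, lam) - p
            pmf = exp(n * log(lam) - lam - lgamma(n + 1))   # pmf(n; lam)
            step = f / pmf                     # Newton step -f/f' with f' = -pmf
            lam = max(lam + step, 1e-9)        # keep the iterate positive
            if abs(step) < tol * lam:
                return lam
        return lam

    # e.g. the Poisson parameter at which 10 or fewer events occur with
    # probability 0.05 (a lower-tail percentile relation):
    print(poisson_param(10, 0.05))   # about 17.0
    ```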

  15. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.

  16. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  17. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

    A computational technique is described for obtaining the probability distribution of power level during a low-source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, having a finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained by the technique are given.

  18. General Exact Solution to the Problem of the Probability Density for Sums of Random Variables

    NASA Astrophysics Data System (ADS)

    Tribelsky, Michael I.

    2002-07-01

    The exact explicit expression for the probability density p_N(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to the application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p_N(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts, as well as that for the entire profile, is obtained. A number of particular examples are considered in detail.
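
    For i.i.d. summands the density p_N(x) is the N-fold convolution of the single-summand density, which can be checked numerically. The sketch below builds p_N(x) for an exponential summand (an arbitrary non-stable example) by repeated FFT convolution on a grid; for this case the exact answer is the Erlang density, so the mode near N - 1 is a quick sanity check.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    dx = 0.01
    x = np.arange(0.0, 200.0, dx)
    p1 = np.exp(-x)                     # pdf of one summand: exponential, mean 1

    N = 50
    pN = p1.copy()
    for _ in range(N - 1):
        pN = fftconvolve(pN, p1)[: len(x)] * dx   # fold in one more summand

    print("total mass:", pN.sum() * dx)           # close to 1
    print("mode near x =", x[np.argmax(pN)])      # Erlang(50) mode is at x = 49
    ```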

  19. General exact solution to the problem of the probability density for sums of random variables.

    PubMed

    Tribelsky, Michael I

    2002-08-12

    The exact explicit expression for the probability density p(N)(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to the application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p(N)(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts, as well as that for the entire profile, is obtained. A number of particular examples are considered in detail.

  20. On the motion of classical three-body system with consideration of quantum fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorkyan, A. S., E-mail: g-ashot@sci.am

    2017-03-15

    We obtained the system of stochastic differential equations which describes the classical motion of the three-body system under the influence of quantum fluctuations. Using the SDEs, a second-order partial differential equation was obtained for the joint probability distribution of the total momentum of the body system. It is shown that the equation for the probability distribution is solved jointly with the classical equations, which in turn are responsible for the topological peculiarities of the tubes of quantum currents, the transitions between asymptotic channels and, respectively, the arising of quantum chaos.

  21. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index is defined as the probability that two randomly selected test scores are statistically different (PDTS). After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…

  22. The Spiral of Life

    NASA Astrophysics Data System (ADS)

    Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.

    2018-04-01

    High-energy photoionization driven by short and circularly-polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.

  23. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial, and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, day of the week, or time of day. For deterministic radial distribution load flow studies the load is taken as constant. But load varies continually and with a high degree of uncertainty, so there is a need to model the probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding its impact on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
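
    A toy version of the Monte-Carlo step is sketched below, using the exact two-bus feeder solution in place of a full radial load-flow solver (and a constant-power load only, so not a full ZIP model); all per-unit line and load parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    R, X, V0 = 0.02, 0.04, 1.0                 # line R, X and source voltage (pu)

    def bus_voltage(P, Q):
        # |V|^2 solves V^4 + (2(PR + QX) - V0^2) V^2 + (P^2 + Q^2)(R^2 + X^2) = 0
        b = 2.0 * (P * R + Q * X) - V0**2
        c = (P**2 + Q**2) * (R**2 + X**2)
        v2 = (-b + np.sqrt(b**2 - 4.0 * c)) / 2.0
        return np.sqrt(v2)

    # Sample active/reactive load from assumed means and standard deviations,
    # then rerun the deterministic solution for each draw.
    P = rng.normal(0.8, 0.1, 10_000)
    Q = rng.normal(0.4, 0.05, 10_000)
    V = bus_voltage(P, Q)
    print(f"mean |V| = {V.mean():.4f} pu, 5th percentile = {np.percentile(V, 5):.4f} pu")
    ```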

  24. A tool for simulating collision probabilities of animals with marine renewable energy devices.

    PubMed

    Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise

    2017-01-01

    The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as that of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details as well as potential issues and limitations in the application of the resulting probability distributions are highlighted.

  25. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.

  26. What Can Quantum Optics Say about Computational Complexity Theory?

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.

    2015-02-01

    Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the points of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.

  27. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
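
    One way to use the log-normal model in practice is to fit a log-normal CDF to measured probability-of-detection data: the probit of the detection probability is linear in the logarithm of the stimulus, so a least-squares line gives the two parameters. The data points and the use of contrast as the stimulus variable in the sketch below are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    contrast = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # illustrative stimuli
    p_detect = np.array([0.05, 0.25, 0.55, 0.85, 0.97])  # illustrative responses

    # Model: P(detect | c) = Phi((ln c - mu) / sigma), so Phi^{-1}(p) is
    # linear in ln c with slope 1/sigma and intercept -mu/sigma.
    z = norm.ppf(p_detect)
    slope, intercept = np.polyfit(np.log(contrast), z, 1)
    sigma, mu = 1.0 / slope, -intercept / slope
    print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")

    # Predicted detection probability at contrast 3.0:
    print(norm.cdf((np.log(3.0) - mu) / sigma))
    ```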

  28. Optimal minimal measurements of mixed states

    NASA Astrophysics Data System (ADS)

    Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.

    1999-07-01

    The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.

  29. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7].

  30. The Finite-Size Scaling Relation for the Order-Parameter Probability Distribution of the Six-Dimensional Ising Model

    NASA Astrophysics Data System (ADS)

    Merdan, Ziya; Karakuş, Özlem

    2016-11-01

    The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with the linear dimensions L=4,6,8,10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.

  31. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

    A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as the times taken to perform tasks and the amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain the probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified averseness to risk and other measures of optimality.
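
    The scoring idea reduces to simple Gaussian arithmetic in the independent, unbounded-Gaussian case (a simplification of the bounded Gaussians described above): means and variances of the per-action resource uses add, and the conflict probability is the tail mass beyond the resource limit. A minimal sketch with invented numbers:

    ```python
    import numpy as np
    from scipy.stats import norm

    uses = [(3.0, 0.5), (5.0, 1.0), (2.5, 0.4)]   # (mean, sd) per planned action
    mean = sum(m for m, s in uses)                 # means add
    sd = np.sqrt(sum(s**2 for m, s in uses))       # variances add

    limit = 13.0                                   # resource capacity (assumed)
    p_conflict = norm.sf(limit, loc=mean, scale=sd)
    print(f"P(resource exceeded) = {p_conflict:.2%}")   # score for this plan
    ```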

  32. The Poisson Random Process. Applications of Probability Theory to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 340.

    ERIC Educational Resources Information Center

    Wilde, Carroll O.

    The Poisson probability distribution is seen to provide a mathematical model from which useful information can be obtained in practical applications. The distribution and some situations to which it applies are studied, and ways to find answers to practical questions are noted. The unit includes exercises and a model exam, and provides answers to…

  33. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision-makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
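
    The flat, discontinuous shape of the FMED is easy to reproduce: subject only to fractile constraints F(x_i) = p_i, the maximum entropy density spreads each probability increment uniformly over its fractile interval. A minimal sketch with illustrative fractiles:

    ```python
    import numpy as np

    xs = np.array([0.0, 2.0, 5.0, 10.0])     # support points, incl. the bounds
    ps = np.array([0.0, 0.25, 0.75, 1.0])    # cumulative probabilities at xs

    def fmed_pdf(x):
        """Piecewise-uniform max-entropy density: (p_{i+1}-p_i)/(x_{i+1}-x_i)."""
        i = np.searchsorted(xs, x, side="right") - 1
        i = np.clip(i, 0, len(xs) - 2)
        return (ps[i + 1] - ps[i]) / (xs[i + 1] - xs[i])

    print(fmed_pdf(1.0), fmed_pdf(3.0), fmed_pdf(7.0))   # 0.125, 0.1667, 0.05
    ```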

  34. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    NASA Astrophysics Data System (ADS)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that with increasing probability, the expected effect of ASMC to increase the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems small for any probability. For a set of parameters, the derived probability distribution of peak discharge seems to be well fitted by the gamma distribution. Finally, an application to a small watershed, with the aim of testing whether the rational runoff coefficient tables used in the rational method can be arranged in advance, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil, were carried out.

  35. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
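
    A minimal sketch of the IPF iteration on a two-dimensional probability table is shown below; the article works with higher-dimensional tables and marginals inferred from drill holes or wells, whereas the marginals and the uninformative starting table here are invented. Each pass rescales the current estimate to match the row sums and then the column sums.

    ```python
    import numpy as np

    def ipf(joint, row_marg, col_marg, n_iter=200):
        """Iterative proportional fitting of a 2-D probability table."""
        p = joint.copy()
        for _ in range(n_iter):
            p *= (row_marg / p.sum(axis=1))[:, None]   # fit row sums
            p *= (col_marg / p.sum(axis=0))[None, :]   # fit column sums
        return p

    joint0 = np.full((2, 2), 0.25)          # uninformative initial estimate
    row_marg = np.array([0.3, 0.7])         # e.g. facies proportions (assumed)
    col_marg = np.array([0.6, 0.4])         # along a second profile (assumed)
    p = ipf(joint0, row_marg, col_marg)
    print(p, p.sum(axis=1), p.sum(axis=0))  # marginals now match the constraints
    ```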

  36. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  37. Measures for a multidimensional multiverse

    NASA Astrophysics Data System (ADS)

    Chung, Hyeyoun

    2015-04-01

    We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (t_obs, t_Λ, and t_c, respectively). In this work we restrict ourselves to bubbles with positive cosmological constant, Λ. We find that in the case of the causal patch cutoff, when the bubble universes have p+1 large spatial dimensions with p ≥ 2, the shape of the probability distribution is such that we obtain the coincidence of time scales t_obs ~ t_Λ ~ t_c. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p = 2, compared to p ≥ 3. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥ 2, and we once again obtain the coincidence t_obs ~ t_Λ ~ t_c. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.

  38. Unifying distribution functions: some lesser known distributions.

    PubMed

    Moya-Cessa, J R; Moya-Cessa, H; Berriel-Valdos, L R; Aguilar-Loreto, O; Barberis-Blostein, P

    2008-08-01

    We show that there is a way to unify distribution functions that describe simultaneously a classical signal in space and (spatial) frequency and position and momentum for a quantum system. Probably the most well known of them is the Wigner distribution function. We show how to unify functions of the Cohen class, Rihaczek's complex energy function, and Husimi and Glauber-Sudarshan distribution functions. We do this by showing how they may be obtained from ordered forms of creation and annihilation operators and by obtaining them in terms of expectation values in different eigenbases.

  39. Generalized ensemble theory with non-extensive statistics

    NASA Astrophysics Data System (ADS)

    Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke

    2017-12-01

    The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing Tsallis entropy, with the constraint that the normalized term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.

  40. Vacuum quantum stress tensor fluctuations: A diagonalization approach

    NASA Astrophysics Data System (ADS)

    Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.

    2018-01-01

    Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.

  41. Methods to elicit probability distributions from experts: a systematic review of reported practice in health technology assessment.

    PubMed

    Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken

    2013-11-01

    Elicitation is a technique that can be used to obtain probability distributions from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts for use in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data were abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficiently detailed, and this impacts on the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
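
    A common downstream step, sketched here with invented numbers, is to convert a few elicited quantiles (e.g. the median and outer quantiles produced by the bisection method) into a parametric probability distribution by least squares; the normal family is assumed purely for illustration.

    ```python
    from scipy.optimize import least_squares
    from scipy.stats import norm

    probs  = [0.05, 0.50, 0.95]      # elicited cumulative probabilities
    values = [0.10, 0.25, 0.55]      # expert's corresponding quantile judgments

    def residuals(theta):
        mu, sigma = theta
        # Mismatch between the model quantiles and the elicited ones.
        return [norm.ppf(p, mu, sigma) - v for p, v in zip(probs, values)]

    fit = least_squares(residuals, x0=[0.25, 0.1])
    mu, sigma = fit.x
    print(f"fitted normal: mu = {mu:.3f}, sigma = {sigma:.3f}")
    ```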

  42. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that the mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The distribution of the background is a symmetric function whose center corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  43. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
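
    A minimal simulation of the formalism, with stand-in choices for the two auxiliary densities (exponential pausing times, Laplace jump magnitudes, with invented parameters), is sketched below; drawing many realizations of the accumulated jumps up to a horizon t gives the distribution of price changes.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ctrw_increment(t, mean_wait=1.0, jump_scale=0.01):
        """Accumulate jumps of a continuous-time random walk up to time t."""
        x, clock = 0.0, 0.0
        while True:
            clock += rng.exponential(mean_wait)   # pausing-time density (assumed)
            if clock > t:
                return x
            x += rng.laplace(0.0, jump_scale)     # jump-magnitude density (assumed)

    samples = np.array([ctrw_increment(t=100.0) for _ in range(20_000)])
    print("std of price change:", samples.std())
    ```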

  44. Density distribution function of a self-gravitating isothermal compressible turbulent fluid in the context of molecular clouds ensembles

    NASA Astrophysics Data System (ADS)

    Donkov, Sava; Stefanov, Ivan Z.

    2018-03-01

    We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the equations of the medium, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q, which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to free fall at small scales.

  45. Probability elicitation to inform early health economic evaluations of new medical technologies: a case study in heart failure disease management.

    PubMed

    Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik

    2013-06-01

    Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding the markets to which a currently available product prototype should be targeted.

  6. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  7. Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Salimi, S.; Jafarizadeh, M. A.

    2009-06-01

    In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In the recipe, the probability of observing the particle on a direct product of graphs is obtained as the product of the probabilities on the corresponding sub-graphs; this method is useful for determining the probability of a walk on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct-product Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also find that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
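
    The multiplication recipe is easy to verify numerically for the classical walk. The sketch below uses the Cartesian (Kronecker-sum) product of two cycles, for which the factorization of the heat kernel exp(-Lt) is exact; treating this as a stand-in for the paper's direct-product construction is an assumption made here for illustration.

```python
import numpy as np
from scipy.linalg import expm

def cycle_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

L1, L2, t = cycle_laplacian(4), cycle_laplacian(5), 0.7

# Kronecker-sum generator of the product walk: L = L1 (+) L2
L = np.kron(L1, np.eye(5)) + np.kron(np.eye(4), L2)

P = expm(-t * L)                       # transition probabilities on the product
P1, P2 = expm(-t * L1), expm(-t * L2)  # transition probabilities on the factors

i, j = 2, 3   # probability of (0,0) -> (i,j) factorizes into sub-graph walks
print(np.isclose(P[0, i * 5 + j], P1[0, i] * P2[0, j]))  # True
```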

  8. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian-MCMC method

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng

    2013-02-01

    The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach obtains not only approximate solutions but also the probability distributions of those solutions, that is, an uncertainty analysis of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference refractivity profiles for the simulated data are assumed, while profiles for the real data were obtained by helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation profiles and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
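
    A generic random-walk Metropolis sampler conveys the idea, with a toy two-parameter Gaussian posterior standing in for the clutter likelihood; in the actual problem each log-posterior evaluation would run the split-step FFT parabolic-equation propagator.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_posterior(theta):
    """Toy stand-in: in RFC inversion this would evaluate the parabolic-
    equation forward model against the observed clutter power."""
    return -0.5 * np.sum(((theta - np.array([1.0, -0.5])) / 0.3) ** 2)

def metropolis(n_samples, step=0.1):
    theta, lp = np.zeros(2), log_posterior(np.zeros(2))
    chain = np.empty((n_samples, 2))
    for k in range(n_samples):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

chain = metropolis(20_000)[5_000:]             # discard burn-in
print("posterior means:", chain.mean(axis=0))  # close to [1.0, -0.5]
```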

  9. Sampling--how big a sample?

    PubMed

    Aitken, C G

    1999-07-01

    It is thought that, in a consignment of discrete units, a certain proportion of the units contain illegal material. A sample of the consignment is to be inspected. Various methods for the determination of the sample size are compared. The consignment will be considered as a random sample from some super-population of units, a certain proportion of which contain drugs. For large consignments, a probability distribution, known as the beta distribution, for the proportion of the consignment which contains illegal material is obtained. This distribution is based on prior beliefs about the proportion. Under certain specific conditions the beta distribution gives the same numerical results as an approach based on the binomial distribution. The binomial distribution provides a probability for the number of units in a sample which contain illegal material, conditional on knowing the proportion of the consignment which contains illegal material. This is in contrast to the beta distribution which provides probabilities for the proportion of a consignment which contains illegal material, conditional on knowing the number of units in the sample which contain illegal material. The interpretation when the beta distribution is used is much more intuitively satisfactory. It is also much more flexible in its ability to cater for prior beliefs which may vary given the different circumstances of different crimes. For small consignments, a distribution, known as the beta-binomial distribution, for the number of units in the consignment which are found to contain illegal material, is obtained, based on prior beliefs about the number of units in the consignment which are thought to contain illegal material. As with the beta and binomial distributions for large samples, it is shown that, in certain specific conditions, the beta-binomial and hypergeometric distributions give the same numerical results. However, the beta-binomial distribution, as with the beta distribution, has a more intuitively satisfactory interpretation and greater flexibility. The beta and the beta-binomial distributions provide methods for the determination of the minimum sample size to be taken from a consignment in order to satisfy a certain criterion. The criterion requires the specification of a proportion and a probability.
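
    In the classic special case of a uniform prior and a sample in which every inspected unit contains drugs, the posterior is Beta(n+1, 1) and the criterion reduces to 1 - theta0**(n+1) >= prob, which the sketch below solves by direct search (more general priors only change the beta parameters).

```python
from scipy.stats import beta

def min_sample_size(theta0=0.5, prob=0.95, a=1.0, b=1.0, n_max=1000):
    """Smallest n such that, if all n sampled units contain drugs, the
    Beta(a + n, b) posterior gives P(proportion > theta0) >= prob."""
    for n in range(1, n_max + 1):
        if beta.sf(theta0, a + n, b) >= prob:
            return n
    raise ValueError("criterion not met within n_max")

# Uniform prior: 1 - 0.5**(n+1) >= 0.95 first holds at n = 4 (0.969).
print(min_sample_size(theta0=0.5, prob=0.95))  # -> 4
```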

  10. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  11. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    NASA Astrophysics Data System (ADS)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. Time series of the slamming pressure during wave impact were first obtained from the experiments and analyzed statistically. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak follows a three-parameter Weibull distribution. Furthermore, the ranges of, and relationships among, the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressures under different model conditions is presented comprehensively; the Weibull parameter values for the wave-slamming pressure peaks differed between test models, and the parameter values were found to decrease as the stiffness of the elastic support increased. A damage criterion for the structural model under wave impact is also discussed initially: the model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact.
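
    The reported relationships are easy to make concrete: with a three-parameter Weibull, the exceedance probability is exp(-((x - D)/L)**k), so at x = D + L it equals exp(-1) = 36.79% regardless of the shape parameter. The parameter values below are illustrative, not the paper's fitted values, beyond respecting D + L = 1.

```python
import numpy as np

def weibull3_exceedance(x, k, scale_L, loc_D):
    """Exceedance probability P(X > x) = exp(-((x - D)/L)**k) for x >= D."""
    z = np.maximum((np.asarray(x, dtype=float) - loc_D) / scale_L, 0.0)
    return np.exp(-z ** k)

k, L_, D = 1.4, 0.62, 0.38                    # illustrative, with D + L = 1
print(weibull3_exceedance(D + L_, k, L_, D))  # exp(-1) = 0.3679...
```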

  12. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
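
    Under the uniform-deposition assumption the hit counts are Poisson, so the single- and multiple-hit probabilities follow in closed form; the sketch below contrasts a low mean hit number with a hot spot enhanced by an assumed factor of 20 (the enhancement factor is illustrative).

```python
import numpy as np

def hit_probabilities(mean_hits):
    """Poisson probabilities of at least one and at least two alpha-particle
    traversals per cell nucleus, for a given mean number of hits."""
    lam = np.asarray(mean_hits, dtype=float)
    return 1.0 - np.exp(-lam), 1.0 - np.exp(-lam) * (1.0 + lam)

for lam in (0.05, 0.05 * 20):        # uniform deposition vs. a 20x hot spot
    p1, p2 = hit_probabilities(lam)
    print(f"mean hits {lam:4.2f}: P(>=1) = {p1:.3f}, P(>=2) = {p2:.4f}")
```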

  13. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution, in order to exclude probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  14. A methodology for the transfer of probabilities between accident severity categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J. D.; Neuhauser, K. S.

    A methodology has been developed which allows the accident probabilities associated with one accident-severity category scheme to be transferred to another severity category scheme. The methodology requires that the schemes use a common set of parameters to define the categories. The transfer of accident probabilities is based on the relationships between probability of occurrence and each of the parameters used to define the categories. Because of the lack of historical data describing accident environments in engineering terms, these relationships may be difficult to obtain directly for some parameters; numerical models or expert judgement are often needed to obtain them. These relationships, even if they are not exact, allow the accident probability associated with any severity category to be distributed within that category in a manner consistent with accident experience, which in turn allows the accident probability to be appropriately transferred to a different category scheme.

  15. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
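
    The construction behind the bivariate Weibull model can be sketched directly: correlated mean-zero isotropic Gaussian components give Rayleigh-distributed speeds, and a power transform produces Weibull margins. The component correlation and shape value below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, b = 50_000, 0.6, 1.8   # draws, component correlation, Weibull shape

cov = np.array([[1.0, rho], [rho, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, n)  # u-components at two sites
v = rng.multivariate_normal([0.0, 0.0], cov, n)  # v-components at two sites

speed = np.hypot(u, v)    # Rayleigh speeds from isotropic Gaussian components
w = speed ** (2.0 / b)    # power transform -> Weibull(shape b) margins

print(f"induced speed correlation: {np.corrcoef(w[:, 0], w[:, 1])[0, 1]:.2f}")
```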

  16. A Statistical Framework for Microbial Source Attribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velsko, S P; Allen, J E; Cunningham, C T

    2009-04-28

    This report presents a general approach to inferring transmission and source relationships among microbial isolates from their genetic sequences. The outbreak transmission graph (also called the transmission tree or transmission network) is the fundamental structure which determines the statistical distributions relevant to source attribution. The nodes of this graph are infected individuals or aggregated sub-populations of individuals in which transmitted bacteria or viruses undergo clonal expansion, leading to a genetically heterogeneous population. Each edge of the graph represents a transmission event in which one or a small number of bacteria or virions infects another node, thus increasing the size of the transmission network. Recombination and re-assortment events originate in nodes which are common to two distinct networks. In order to calculate the probability that one node was infected by another, given the observed genetic sequences of microbial isolates sampled from them, we require two fundamental probability distributions. The first is the probability of obtaining the observed mutational differences between two isolates given that they are separated by M steps in a transmission network. The second is the probability that two nodes sampled randomly from an outbreak transmission network are separated by M transmission events. We show how these distributions can be obtained from the genetic sequences of isolates obtained by sampling from past outbreaks combined with data from contact tracing studies. Realistic examples are drawn from the SARS outbreak of 2003, the FMDV outbreak in Great Britain in 2001, and HIV transmission cases. The likelihood estimators derived in this report, and the underlying probability distribution functions required to calculate them, possess certain compelling general properties in the context of microbial forensics. These include the ability to quantify the significance of a sequence 'match' or 'mismatch' between two isolates; the ability to capture non-intuitive effects of network structure on inferential power, including the 'small world' effect; the insensitivity of inferences to uncertainties in the underlying distributions; and the concept of rescaling, i.e. the ability to collapse sub-networks into single nodes and examine transmission inferences on the rescaled network.

  17. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
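
    The effect is simple to reproduce by simulation for the lognormal case: estimate the parameters from a small sample, set the control at the fitted 99% quantile, and average the true exceedance probability of that control over many repetitions; the average comes out above the nominal 1%. The sample size and parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, trials = 30, 20_000
mu, sigma = 0.0, 1.0                    # true but unknown log-loss parameters

exceed = np.empty(trials)
for t in range(trials):
    x = rng.normal(mu, sigma, n)        # observed log-losses (lognormal losses)
    m, s = x.mean(), x.std(ddof=1)      # plug-in parameter estimates
    threshold = m + s * norm.ppf(0.99)  # control at the *fitted* 99% quantile
    exceed[t] = norm.sf(threshold, mu, sigma)  # true exceedance of that control

print(f"expected failure frequency: {exceed.mean():.4f} vs nominal 0.0100")
```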

  18. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
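
    The core linear-algebra step can be sketched compactly: matching the raw moments of a polynomial density on an interval yields a linear system for its coefficients. This is a bare-bones reading of the procedure; the paper's algorithmic setup adds safeguards not reproduced here.

```python
import numpy as np

def polynomial_pdf_coeffs(moments, a=0.0, b=1.0):
    """Coefficients c_j of p(x) = sum_j c_j x**j on [a, b] whose raw moments
    equal moments[0..N] (moments[0] = 1 enforces normalization)."""
    N = len(moments) - 1
    A = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(N + 1)] for k in range(N + 1)])
    return np.linalg.solve(A, np.asarray(moments, dtype=float))

# Degree-2 fit to Beta(2,2) from its moments 1, 1/2, 3/10 recovers 6x(1 - x):
print(polynomial_pdf_coeffs([1.0, 0.5, 0.3]))  # ~ [0, 6, -6]
```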

  19. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  20. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  1. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
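
    For comparison with the closed-form approximation, the Monte Carlo baseline is short to write: sample each basic-event probability from its lognormal, combine through the gate logic, and read off percentiles. The gate structure, medians, and error factors below are illustrative stand-ins, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

def lognormal_p(median, error_factor, size):
    """Lognormal basic-event probability; the error factor is taken at the
    95th percentile, a common PRA convention assumed here."""
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

p1, p2 = lognormal_p(1e-3, 3, n), lognormal_p(2e-3, 3, n)
p3, p4 = lognormal_p(5e-4, 5, n), lognormal_p(1e-3, 5, n)

top = p1 * p2 + p3 * p4  # (E1 AND E2) OR (E3 AND E4), rare-event approximation

print("median top-event probability:", np.median(top))
print("95th percentile:", np.percentile(top, 95))
```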

  2. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.

  3. Kinetics of Slow Neutrons in a Time-of-flight Spectrometer. II. Probability of Transmission Across a Rotating Slit and Distribution after the Flight of Neutrons with Velocity Spectrum F (v); CINETICA DEI NEUTRONI LENTI IN UNO SPETTROMETRO A TEMPO DI VOLO. II. PROBABILITA DI TRANSMISSIONE ATTRAVERSO UNA FENDITURA RUOTANTE E DISTRIBUZIONE DOPO IL VOLO DI NEUTRONI CON SPETTRO DI VELOCITA F (V)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsequerra, M.; Pauli, G.

    1958-12-01

    On the basis of the results obtained in Part I (CNC-1), expressions are derived for the transmission probability through a revolving curved slit for neutrons having a velocity distribution f(v), the distribution shown by the neutrons after the flight, and the uncertainty in the energy of neutrons detected in an infinitesimal time interval. (auth)

  4. In-beam Fission Study at JAEA

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using a 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  5. In-beam fission study for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using a 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  6. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters; this problem is addressed in this paper. Time series of the F2-layer critical frequency foF2(t) were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the normal-process model. We therefore attempt to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excess-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The conducted analysis supports the applicability of a model based on a Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.

  7. Probability distribution of extreme share returns in Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate suitable probability distributions to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maxima of daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used for parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
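
    To reproduce the flavor of the fit, the sketch below fits a Generalized Pareto to synthetic weekly maxima with scipy; note the substitution of maximum likelihood for the paper's L-moments, and that the data are simulated rather than the Bursa Malaysia returns.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)

# Synthetic stand-in for 13 years of weekly maxima of daily returns:
weekly_max = genpareto.rvs(c=0.1, loc=0.01, scale=0.008, size=676,
                           random_state=rng)

c, loc, scale = genpareto.fit(weekly_max)   # MLE in place of L-moments
print(f"shape={c:.3f}, location={loc:.4f}, scale={scale:.4f}")

# 50-week return level: the level exceeded on average once every 50 weeks.
print("50-week return level:", genpareto.ppf(1 - 1 / 50, c, loc, scale))
```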

  8. A probable probability distribution of a series nonequilibrium states in a simple system out of equilibrium

    NASA Astrophysics Data System (ADS)

    Gao, Haixia; Li, Ting; Xiao, Changming

    2016-05-01

    When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, passing through a series of intermediate nonequilibrium states. With the assistance of Bayesian statistics and the hyperensemble, a probable probability distribution over these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The equilibrium state has the largest probability, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion also holds in the multi-state space. Furthermore, if the probability is interpreted as the relative time the corresponding nonequilibrium state persists, then the velocity at which a nonequilibrium state returns to equilibrium can be determined through the reciprocal of the derivative of this probability. This shows that the farther the state is from equilibrium, the faster the returning velocity; as the system nears its equilibrium state, the velocity tends to become smaller and smaller, finally tending to 0 when the equilibrium state is reached.

  9. Probability distributions for Markov chain based quantum walks

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical to the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line and obtain its weak limit. Notably, this quantum walk appears to spread faster than its counterpart discussed in the literature, the quantum walk on the line driven by the Grover coin. The paper closes with an outlook on possible future directions.

  10. Shallow slip amplification and enhanced tsunami hazard unravelled by dynamic simulations of mega-thrust earthquakes

    PubMed Central

    Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.

    2016-01-01

    The 2011 Tohoku earthquake produced an unexpected large amount of shallow slip greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, earthquake size, and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically-derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically-derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733

  11. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.

  12. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.

  13. Biomechanical Tolerance of Calcaneal Fractures

    PubMed Central

    Yoganandan, Narayan; Pintar, Frank A.; Gennarelli, Thomas A.; Seipel, Robert; Marks, Richard

    1999-01-01

    Biomechanical studies have been conducted in the past to understand the mechanisms of injury to the foot-ankle complex. However, statistically based tolerance criteria for calcaneal complex injuries are lacking. Consequently, this research was designed to derive a probability distribution that represents human calcaneal tolerance under impact loading such as that encountered in vehicular collisions. Information for deriving the distribution was obtained by experiments on unembalmed human cadaver lower extremities. Briefly, the protocol included the following. The knee joint was disarticulated such that the entire lower extremity distal to the knee joint remained intact. The proximal tibia was fixed in polymethylmethacrylate. The specimens were aligned and impact loading was applied using mini-sled pendulum equipment. The pendulum impactor dynamically loaded the plantar aspect of the foot once. Following the test, specimens were palpated and radiographs in multiple planes were obtained. Injuries were classified as no fracture, or extra- and intra-articular fractures of the calcaneus. There were 14 cases of no injury and 12 cases of calcaneal fracture. The fracture forces (mean: 7802 N) were significantly different (p<0.01) from the forces in the no injury (mean: 4144 N) group. The probability of calcaneal fracture determined using logistic regression indicated that a force of 6.2 kN corresponds to a 50 percent probability of calcaneal fracture. The derived probability distribution is useful in the design of dummies and vehicular surfaces.

  14. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.

  15. Beating the odds: The Poisson distribution of all input cells during limiting dilution grossly underestimates whether a cell line is clonally-derived or not.

    PubMed

    Zhou, Yizhou; Shaw, David; Lam, Cynthia; Tsukuda, Joni; Yim, Mandy; Tang, Danming; Louie, Salina; Laird, Michael W; Snedecor, Brad; Misaghi, Shahram

    2017-09-23

    Establishing that a cell line was derived from a single cell progenitor and defined as clonally-derived for the production of clinical and commercial therapeutic protein drugs has been the subject of increased emphasis in cell line development (CLD). Several regulatory agencies have expressed that the prospective probability of clonality for CHO cell lines is assumed to follow the Poisson distribution based on the input cell count. The probability of obtaining monoclonal progenitors based on the Poisson distribution of all cells suggests that one round of limiting dilution may not be sufficient to assure the resulting cell lines are clonally-derived. We experimentally analyzed clonal derivatives originating from single cell cloning (SCC) via one round of limiting dilution, following our standard legacy cell line development practice. Two cell populations with stably integrated DNA spacers were mixed and subjected to SCC via limiting dilution. Cells were cultured in the presence of selection agent, screened, and ranked based on product titer. Post-SCC, the growing cell lines were screened by PCR analysis for the presence of identifying spacers. We observed that the percentage of nonclonal populations was below 9%, which is considerably lower than the determined probability based on the Poisson distribution of all cells. These results were further confirmed using fluorescence imaging of clonal derivatives originating from SCC via limiting dilution of mixed cell populations expressing GFP or RFP. Our results demonstrate that in the presence of selection agent, the Poisson distribution of all cells clearly underestimates the probability of obtaining clonally-derived cell lines. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 2017. © 2017 American Institute of Chemical Engineers.

  16. Computing exact bundle compliance control charts via probability generating functions.

    PubMed

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance to evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
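
    The probability-generating-function trick amounts to multiplying one small polynomial per bundle element, which is an exact convolution rather than a series approximation; the per-element compliance rates below are illustrative.

```python
import numpy as np

def bundle_compliance_pmf(p):
    """Exact PMF of the number of compliant elements among independent
    indicators with probabilities p, via the product of the per-element
    probability generating functions (1 - p_i) + p_i * z."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])  # multiply PGF polynomials
    return pmf                                  # pmf[k] = P(K = k)

pmf = bundle_compliance_pmf([0.95, 0.90, 0.85, 0.99])
print("P(fully compliant bundle) =", pmf[-1])  # exact, including the tails
print("P(K <= 2) =", pmf[:3].sum())
```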

  17. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
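
    For the unconstrained continuous case, the maximizing density is the familiar exponential-power (generalized Gaussian) form, which also makes the straight-line relationship explicit; the following is a sketch of that closed form, not a restatement of the report's tabulations.

```latex
% Maximize h(f) = -\int f \ln f \,dx subject to \int f\,dx = 1 and
% E|X|^p = \|X\|_p^p. Lagrange multipliers give the generalized Gaussian
f^{*}(x) = \frac{p}{2a\,\Gamma(1/p)}\, e^{-|x/a|^{p}},
\qquad E|X|^{p} = \frac{a^{p}}{p},
% whose differential entropy is linear in the logarithm of the L_p norm:
h(f^{*}) = \ln\frac{2a\,\Gamma(1/p)}{p} + \frac{1}{p}
         = \ln\|X\|_{p} + \mathrm{const}(p).
```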

  18. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large-sample distribution are derived. Even for very large samples, the distribution of this estimate is found to be non-normal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. A computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in application.
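
    Reading the extreme mean as the mean of the upper tail of probability p (an interpretive assumption made here), it has the closed form E[X | X > x_p] = mu + sigma * phi(z)/p, with z the (1 - p) standard normal quantile:

```python
from scipy.stats import norm

def extreme_mean(p, mu=0.0, sigma=1.0):
    """Mean of the upper-p tail of N(mu, sigma^2):
    E[X | X > x_p] where P(X > x_p) = p (inverse Mills ratio)."""
    z = norm.ppf(1.0 - p)                # truncation point in standard units
    return mu + sigma * norm.pdf(z) / p

print(extreme_mean(0.05))  # mean of the top 5% of a standard normal, ~2.063
```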

  19. Use of Bayesian Inference in Crystallographic Structure Refinement via Full Diffraction Profile Analysis

    PubMed Central

    Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.

    2016-01-01

    A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221

  20. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the 5-year design lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
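
    The quoted reliability follows directly from the lifetime distribution, as a one-line check confirms (assuming, as the error analysis implies, a normal lifetime model):

```python
from scipy.stats import norm

# P(lifetime > 5 yr) for lifetime ~ N(mean = 7.7 yr, sd = 1.1 yr):
print(norm.sf(5.0, loc=7.7, scale=1.1))   # ~0.993, as reported
```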

  1. Conditional Probabilities and Collapse in Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Laura, Roberto; Vanni, Leonardo

    2008-09-01

    We show that, by including both the system and the apparatus in the quantum description of the measurement process and using the concept of conditional probabilities, it is possible to deduce the statistical operator of the system after a measurement with a given result, which gives the probability distribution for all possible consecutive measurements on the system. This statistical operator, representing the state of the system after the first measurement, is in general not the same as that which would be obtained using the postulate of collapse.

  2. Misinterpretation of statistical distance in security of quantum key distribution shown by simulation

    NASA Astrophysics Data System (ADS)

    Iwakoshi, Takehisa; Hirota, Osamu

    2014-10-01

    This study tests the interpretation in quantum key distribution (QKD) that the trace distance between the distributed quantum state and the ideal mixed state is a maximum failure probability of the protocol. Around 2004, this interpretation was proposed and standardized to satisfy both key uniformity in the context of universal composability and an operational meaning for the failure probability of the key extraction. However, this proposal had not been verified concretely for many years, while H. P. Yuen and O. Hirota have questioned this interpretation since 2009. To test the interpretation, a physical random number generator was employed to evaluate key uniformity in QKD. We calculated the statistical distance, which corresponds to the trace distance in quantum theory after a quantum measurement is made, and compared it with the failure probability to check whether universal composability was obtained. As a result, the statistical distance between the probability distribution of the physical random numbers and the ideal uniform distribution was very large. It is also explained why the trace distance is not suitable for guaranteeing security in QKD from the viewpoint of quantum binary decision theory.

  3. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for coalescents with large sample sizes. The obtained results are based on computational methodologies which combine coalescence-time scale changes with techniques of integral transformations and analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluating the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for the analysis of a large human mitochondrial DNA dataset. PMID:28170404

  4. Non-gaussian statistics of pencil beam surveys

    NASA Technical Reports Server (NTRS)

    Amendola, Luca

    1994-01-01

    We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability distribution of power-spectrum peaks by means of an Edgeworth expansion and find that the higher-order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide whether a non-Gaussian distribution alone is sufficient to explain the 128 Mpc/h periodicity, or whether extra large-scale power is necessary.

  5. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables; in some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
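
    The core of the method fits in a few lines: stratify [0, 1) into equal-probability intervals, draw once per interval, and permute the interval order independently for each variable; mapping through an inverse CDF then yields any marginal. This bare sketch omits the library's correlation control.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_vars, seed=None):
    """Basic LHS: one uniform draw per equal-probability stratum, with the
    strata randomly paired across variables (no correlation restriction)."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):                 # decouple pairing across variables
        u[:, j] = rng.permutation(u[:, j])
    return u                                # stratified uniform [0, 1) samples

samples = latin_hypercube(10, 2, seed=0)
x = norm.ppf(samples[:, 0], loc=5.0, scale=2.0)  # map to a normal marginal
print(np.sort(x))   # exactly one sample in each decile of the normal
```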

  6. Assessment of Group Preferences and Group Uncertainty for Decision Making

    DTIC Science & Technology

    1976-06-01

    This report examines the assessment of group preferences and group uncertainty for decision making, including when group judgments should be preferred to individual judgments and the aggregation of individual probability judgments into a group judgment using conjugate distributions. (Only fragments of the abstract are legible; recoverable keywords: statisticized group, group decision making, group judgment, subjective probability, Delphi method, expected utility, nominal group.)

  7. The propagator of stochastic electrodynamics

    NASA Astrophysics Data System (ADS)

    Cavalleri, G.

    1981-01-01

    The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ~ω3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.

  8. Using hidden Markov models to align multiple sequences.

    PubMed

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
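
    The multiply-along-the-path scoring is easy to make concrete with a toy two-state model; the states, symbols, and probabilities below are invented for illustration, and a real profile HMM would add insert/delete states and use the forward algorithm to sum over all paths.

```python
# Toy HMM: two match states emitting symbols A and C (numbers illustrative).
emit = {"M1": {"A": 0.8, "C": 0.2}, "M2": {"A": 0.3, "C": 0.7}}
trans = {("start", "M1"): 1.0, ("M1", "M2"): 0.9, ("M2", "end"): 1.0}

def path_probability(sequence, path):
    """Multiply emission and transition probabilities along one state path."""
    p = trans[("start", path[0])]
    for i, (symbol, state) in enumerate(zip(sequence, path)):
        if i > 0:
            p *= trans[(path[i - 1], state)]  # transition probability
        p *= emit[state][symbol]              # state (emission) probability
    return p * trans[(path[-1], "end")]

print(path_probability("AC", ["M1", "M2"]))   # 1.0 * 0.8 * 0.9 * 0.7 = 0.504
```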

  9. Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
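
    The W-S step the abstract refers to is easy to sketch. The snippet below computes the effective degrees of freedom and compares the resulting t-based expanded uncertainty with a plain normal quantile; the input uncertainties are invented, and the plain normal quantile shown here omits the Bayesian inflation of the standard uncertainty that the paper's alternative actually involves.

```python
import numpy as np
from scipy import stats

def ws_effective_dof(u, nu):
    """Welch-Satterthwaite formula: nu_eff = u_c^4 / sum(u_i^4 / nu_i),
    with unit sensitivity coefficients assumed for simplicity."""
    u, nu = np.asarray(u), np.asarray(nu)
    uc2 = np.sum(u**2)                    # combined variance
    return uc2**2 / np.sum(u**4 / nu)

# Two input quantities with standard uncertainties and degrees of freedom
# (numbers are illustrative only).
u, nu = [0.3, 0.4], [4, 9]
nu_eff = ws_effective_dof(u, nu)
uc = np.sqrt(sum(x**2 for x in u))

# 95 % expanded uncertainty: scaled-and-shifted t (ISO-GUM route) vs. a
# plain normal quantile.
U_t = stats.t.ppf(0.975, nu_eff) * uc
U_norm = stats.norm.ppf(0.975) * uc
print(nu_eff, U_t, U_norm)
```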

  10. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    PubMed

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter

    2012-01-01

    A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
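
    The long-time limit referred to above can be computed by power iteration on the transition matrix. The sketch below uses an invented 4-site matrix, not the 50-site autopsy-derived network.

```python
import numpy as np

# Toy 4-site transition matrix (rows sum to 1); entries are invented.
P = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.1, 0.0, 0.6, 0.3],
              [0.2, 0.4, 0.0, 0.4],
              [0.3, 0.3, 0.4, 0.0]])

def steady_state(P, tol=1e-12, max_iter=10_000):
    """Long-time limit of the Markov chain: iterate pi <- pi P to convergence."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # uniform initial distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return nxt

print(steady_state(P))   # analogue of the ensemble metastatic distribution
```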

  11. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on Meijer's G-function. Evaluation and simulation of the system performance are provided, and the aperture averaging effect is discussed as well.

  12. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, we use linear programming to obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  13. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249

  14. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
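
    A hedged sketch of the fitting-and-scoring workflow just described: fit each candidate distribution and rank by the Kolmogorov-Smirnov statistic (the Anderson-Darling and Chi-square scores would be accumulated analogously). The synthetic data stand in for the 29 annual maxima, and Pearson type 5 is implemented as an inverse gamma.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mdr = rng.lognormal(mean=4.0, sigma=0.5, size=29)   # stand-in for 29 annual maxima (mm)

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "pearson5":  stats.invgamma,    # Pearson type 5 is an inverse gamma
}

for name, dist in candidates.items():
    params = dist.fit(mdr)                           # maximum likelihood fit
    ks = stats.kstest(mdr, dist.cdf, args=params)    # K-S goodness-of-fit statistic
    print(f"{name:10s}  D = {ks.statistic:.3f}")     # smaller D ranks higher
```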

  15. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value A_w for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.

  16. Velocity distributions among colliding asteroids

    NASA Technical Reports Server (NTRS)

    Bottke, William F., Jr.; Nolan, Michael C.; Greenberg, Richard; Kolvoord, Robert A.

    1994-01-01

    The probability distribution for impact velocities between two given asteroids is wide, non-Gaussian, and often contains spikes according to our new method of analysis in which each possible orbital geometry for collision is weighted according to its probability. An average value would give a good representation only if the distribution were smooth and narrow. Therefore, the complete velocity distribution we obtain for various asteroid populations differs significantly from published histograms of average velocities. For all pairs among the 682 asteroids in the main-belt with D greater than 50 km, we find that our computed velocity distribution is much wider than previously computed histograms of average velocities. In this case, the most probable impact velocity is approximately 4.4 km/sec, compared with the mean impact velocity of 5.3 km/sec. For cases of a single asteroid (e.g., Gaspra or Ida) relative to an impacting population, the distribution we find yields lower velocities than previously reported by others. The width of these velocity distributions implies that mean impact velocities must be used with caution when calculating asteroid collisional lifetimes or crater-size distributions. Since the most probable impact velocities are lower than the mean, disruption events may occur less frequently than previously estimated. However, this disruption rate may be balanced somewhat by an apparent increase in the frequency of high-velocity impacts between asteroids. These results have implications for issues such as asteroidal disruption rates, the amount/type of impact ejecta available for meteoritical delivery to the Earth, and the geology and evolution of specific asteroids like Gaspra.

  17. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), σ⁰, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ⁰ are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  18. Stochastic optimal operation of reservoirs based on copula functions

    NASA Astrophysics Data System (ADS)

    Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen

    2018-02-01

    Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix is calculated based on copula functions; and 2) the value function of the last period is calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and that the reservoir operation benefit can also be increased.
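
    The copula-based transition probability comes from differentiating the copula: P(V ≤ v | U = u) = ∂C(u, v)/∂u. Below is a sketch for the Clayton copula, one member of the Archimedean family; the θ value and the quantile levels are invented for illustration.

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

def clayton_conditional(v, u, theta):
    """P(V <= v | U = u) = dC(u, v)/du, the copula-based transition probability."""
    return u ** (-(theta + 1.0)) * (u**-theta + v**-theta - 1.0) ** (-(1.0 + 1.0/theta))

# Example: given this period's inflow at its 30th percentile (u = 0.3),
# probability that next period's inflow falls below its median (v = 0.5):
print(clayton_conditional(0.5, 0.3, theta=2.0))   # high, reflecting positive dependence
```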

  19. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests on the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach and no longer subject to statistical uncertainty.
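
    The threshold the abstract describes is the location parameter of a three-parameter Weibull fit. A sketch with synthetic strength data (not ZERODUR measurements) contrasting the two- and three-parameter fits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic bending-strength data (MPa) with a genuine lower threshold of 50 MPa;
# the values are illustrative stand-ins, not ZERODUR measurements.
strength = stats.weibull_min.rvs(c=3.0, loc=50.0, scale=40.0, size=200,
                                 random_state=rng)

# Two-parameter fit (threshold forced to zero) vs. three-parameter fit.
c2, loc2, scale2 = stats.weibull_min.fit(strength, floc=0.0)
c3, loc3, scale3 = stats.weibull_min.fit(strength)

print(f"2-par: shape={c2:.2f}, scale={scale2:.1f} (threshold fixed at 0)")
print(f"3-par: shape={c3:.2f}, threshold={loc3:.1f}, scale={scale3:.1f}")
# The fitted 'loc' of the 3-parameter model is the minimum breakage stress
# that the deterministic design approach builds on.
```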

  20. Two tandem queues with general renewal input. 2: Asymptotic expansions for the diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knessl, C.; Tier, C.

    1999-10-01

    In Part 1 the authors formulated and solved a diffusion model for two tandem queues with exponential servers and general renewal arrivals. They thus obtained the heavy traffic diffusion approximation to the steady state joint queue length distribution for this network. Here they study asymptotic and numerical properties of the diffusion approximation. In particular, analytical expressions are obtained for the tail probabilities. Both the joint distribution of the two queues and the marginal distribution of the second queue are considered. They also give numerical illustrations of how this marginal is affected by changes in the arrival and service processes.

  1. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently in biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
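
    A density histogram is one simple way to realize the per-frame PDF feature described above; the bin count, amplitude range, and synthetic frame below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pdf_feature(frame, bins=32, amp_range=(-1.0, 1.0)):
    """Estimate the PDF of one frame of audio samples with a density histogram.
    Bin count and amplitude range are illustrative choices."""
    hist, _ = np.histogram(frame, bins=bins, range=amp_range, density=True)
    return hist

rng = np.random.default_rng(3)
frame = 0.3 * rng.standard_normal(1024)      # stand-in for one sampled voice frame
print(pdf_feature(frame)[:8])                # first few bins of the PDF feature
```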

  2. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights appear in physics problems that are beyond a straightforward importance sampling treatment, as required in Monte Carlo calculations. This is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  3. The Homotopic Probability Distribution and the Partition Function for the Entangled System Around a Ribbon Segment Chain

    NASA Astrophysics Data System (ADS)

    Qian, Shang-Wu; Gu, Zhi-Yu

    2001-12-01

    Using Feynman's path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_L^n for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width of the ribbon segment chain, 2a, increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.

  4. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. In contrast to delta-correlated noise, in which the intervals follow an exponential law, the probability of the subsequent spike depends on the previous spike, and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is most pronounced for a low detection threshold. Similarity is observed between the behavior of the distributions of inter-discharge intervals in a thundercloud and that of the noise spikes as the repetition rate of the discharges/spikes varies, the rate being determined by the ratio of the detection threshold to the root-mean-square value of the noise. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in discharge emergence in a thundercloud.

  5. Digital simulation of an arbitrary stationary stochastic process by spectral representation.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2011-04-01

    In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes and can thus be regarded as an accurate engineering approximation, which can be used for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
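
    A minimal sketch of the two-step recipe: spectrally shape white Gaussian noise, then map it memorylessly through the Gaussian CDF and the target inverse CDF. The low-pass spectrum and exponential target marginal below are arbitrary stand-ins for the paper's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2**14

# 1) White Gaussian noise, shaped in the frequency domain to a target spectrum
#    (a simple low-pass power spectrum here; the shape is illustrative).
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (f / 0.05)**2)          # filter amplitude ~ sqrt(target PSD)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                         # renormalize to unit variance

# 2) Memoryless inverse-transform step: Gaussian -> uniform -> target marginal
#    (a unit exponential as the arbitrary target PDF).
u = stats.norm.cdf(colored)
samples = stats.expon.ppf(u)

print(samples.mean(), samples.var())             # both ~1 for a unit exponential
```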

  6. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory has nonlocal correlations, which bothered Einstein, but which were found to satisfy relativistic causality. Correlation for a shared quantum state manifests itself, in the standard quantum framework, by joint probability distributions that can be obtained by applying state reduction and the probability assignment rule known as the Born rule. Quantum correlations, which show nonlocality when the shared state has entanglement, can be changed if we apply a different probability assignment rule. As a result, the amount of nonlocality in the quantum correlation will change. The issue is whether changing the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule for quantum measurement is derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through quantum probability assignment.

  7. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.

  8. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, rarely available in other approaches yet crucial information, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.

  9. Value assignment and uncertainty evaluation for single-element reference solutions

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha; Butler, Therese A.; Molloy, John L.; Winchester, Michael R.

    2018-06-01

    A Bayesian statistical procedure is proposed for value assignment and uncertainty evaluation for the mass fraction of the elemental analytes in single-element solutions distributed as NIST standard reference materials. The principal novelty that we describe is the use of information about relative differences observed historically between the measured values obtained via gravimetry and via high-performance inductively coupled plasma optical emission spectrometry, to quantify the uncertainty component attributable to between-method differences. This information is encapsulated in a prior probability distribution for the between-method uncertainty component, and it is then used, together with the information provided by current measurement data, to produce a probability distribution for the value of the measurand from which an estimate and evaluation of uncertainty are extracted using established statistical procedures.

  10. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.

  11. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce their threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which are not independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals through copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme events of a hydroclimatological nature such as severe storms. This analysis was conducted in the lower Tunjuelo River basin in Colombia for precipitation events. The results obtained show that for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas and thus provide more accurate and reliable information on design storms and associated risks. It is shown how the use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, used to represent the needs of hydrological designs properly in frequency analysis.

  12. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.

  13. Assessing the Risk of Primary Amoebic Meningoencephalitis from Swimming in the Presence of Environmental Naegleria fowleri

    PubMed Central

    Cabanes, Pierre-André; Wallet, France; Pringuez, Emmanuelle; Pernin, Pierre

    2001-01-01

    Free-living Naegleria fowleri amoebae cause primary amoebic meningoencephalitis (PAM). Because of the apparent conflict between their ubiquity and the rarity of cases observed, we sought to develop a model characterizing the risk of PAM after swimming as a function of the concentration of N. fowleri. The probability of death from PAM as a function of the number of amoebae inhaled is modeled according to results obtained from animals infected with amoeba strains. The calculation of the probability of inhaling one or more amoebae while swimming is based on a double hypothesis: that the distribution of amoebae in the water follows a Poisson distribution and that the mean quantity of water inhaled while swimming is 10 ml. The risk of PAM for a given concentration of amoebae is then obtained by summing the following products: the probability of inhaling n amoebae × the probability of PAM associated with inhaling these n amoebae. We chose the lognormal model to assess the risk of PAM because it yielded the best analysis of the studentized residuals. Nonetheless, the levels of risk thereby obtained cannot be applied to humans without correction, because they are substantially greater than those indicated by available epidemiologic data. The curve was thus adjusted by a factor calculated with the least-squares method. This provides the PAM risk in humans as a function of the N. fowleri concentration in the river. For example, the risk is 8.5 × 10⁻⁸ at a concentration of 10 N. fowleri amoebae per liter. PMID:11425704
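
    The summation the abstract describes is straightforward to reproduce. In the sketch below the Poisson inhalation model follows the paper, but the per-amoeba dose-response is a hypothetical placeholder (the paper instead fits a lognormal model to animal data and rescales it), so the numbers are not comparable to the published risk values.

```python
import numpy as np
from scipy import stats

def pam_risk(conc_per_L, volume_L=0.010, p_inf=1e-4, n_max=200):
    """Risk = sum over n of P(inhale n amoebae) * P(PAM | n amoebae).

    conc_per_L : N. fowleri concentration in the water (amoebae per litre)
    volume_L   : mean volume of water inhaled while swimming (10 ml in the paper)
    p_inf      : hypothetical per-amoeba infection probability; a placeholder
                 for the paper's adjusted lognormal dose-response
    """
    lam = conc_per_L * volume_L                  # mean number of amoebae inhaled
    n = np.arange(1, n_max + 1)
    p_n = stats.poisson.pmf(n, lam)              # Poisson hypothesis of the paper
    p_pam_given_n = 1.0 - (1.0 - p_inf) ** n     # independent-action placeholder
    return np.sum(p_n * p_pam_given_n)

print(pam_risk(10.0))   # illustrative risk at 10 amoebae per litre
```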

  14. A comparison of two methods for expert elicitation in health technology assessments.

    PubMed

    Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken

    2016-07-26

    When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other method. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; (iii) to collect subjective preferences of the experts for the different elicitation methods used. Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods - the histogram and hybrid elicitation methods - presented in a random order. Individual distributions were mathematically aggregated across experts with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model, and mean incremental cost-effectiveness ratios and the expected values of perfect information (EVPI) were calculated for each method and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods and the extent to which the probability distributions obtained from each method accurately reflected the expert's opinion were also recorded. Six experts completed the task. Mean ICERs from the probabilistic analysis ranged from £162,600 to £175,500 per quality-adjusted life year (QALY) depending on the elicitation and weighting methods used. Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI value at the £30,000 per QALY threshold decreased by 74-86 % from the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but attributed a perception of more accuracy to the hybrid method. Inclusion of expert elicitation can decrease decision uncertainty. Here, choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have.

  15. Weighing Clinical Evidence Using Patient Preferences: An Application of Probabilistic Multi-Criteria Decision Analysis.

    PubMed

    Broekhuizen, Henk; IJzerman, Maarten J; Hauber, A Brett; Groothuis-Oudshoorn, Catharina G M

    2017-03-01

    The need for patient engagement has been recognized by regulatory agencies, but there is no consensus about how to operationalize this. One approach is the formal elicitation and use of patient preferences for weighing clinical outcomes. The aim of this study was to demonstrate how patient preferences can be used to weigh clinical outcomes when both preferences and clinical outcomes are uncertain by applying a probabilistic value-based multi-criteria decision analysis (MCDA) method. Probability distributions were used to model random variation and parameter uncertainty in preferences, and parameter uncertainty in clinical outcomes. The posterior value distributions and rank probabilities for each treatment were obtained using Monte-Carlo simulations. The probability of achieving the first rank is the probability that a treatment represents the highest value to patients. We illustrated our methodology for a simplified case on six HIV treatments. Preferences were modeled with normal distributions and clinical outcomes were modeled with beta distributions. The treatment value distributions showed the rank order of treatments according to patients and illustrate the remaining decision uncertainty. This study demonstrated how patient preference data can be used to weigh clinical evidence using MCDA. The model takes into account uncertainty in preferences and clinical outcomes. The model can support decision makers during the aggregation step of the MCDA process and provides a first step toward preference-based personalized medicine, yet requires further testing regarding its appropriate use in real-world settings.
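
    The rank-probability step can be sketched with Monte Carlo draws: sample uncertain weights and outcomes, score each treatment, and count how often each ranks first. All distributions and parameter values below are invented placeholders, not the paper's HIV case data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sims, n_treat = 10_000, 3

# Hypothetical setup: two criteria (efficacy, tolerability). Preference weights
# are uncertain (normal), clinical outcome probabilities are uncertain (beta).
w1 = np.clip(rng.normal(0.6, 0.1, n_sims), 0.0, 1.0)       # weight on efficacy
weights = np.stack([w1, 1.0 - w1], axis=1)                  # (sims, criteria)
outcomes = np.stack(
    [rng.beta([20, 15, 18], [5, 5, 4], size=(n_sims, n_treat)),   # efficacy
     rng.beta([10, 14, 8], [6, 4, 8], size=(n_sims, n_treat))],   # tolerability
    axis=2)                                                 # (sims, treatments, criteria)

values = np.einsum('sc,stc->st', weights, outcomes)         # posterior value per draw
first_rank = np.bincount(values.argmax(axis=1), minlength=n_treat) / n_sims
print("P(first rank):", np.round(first_rank, 3))            # prob. of highest value
```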

  16. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    PubMed

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to support effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% mean predicted probability) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
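
    For readers wanting the RF step in code: a hedged sketch with synthetic cells and a toy presence rule standing in for the survey data; scikit-learn's predict_proba supplies the presence probabilities that the study maps along the coast.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical training grid: one row per cell with the four static predictors
# (depth m, slope deg, distance to 100 m contour m, distance to coast m) and a
# presence/absence label; all values are synthetic stand-ins.
X = rng.random((500, 4)) * [100.0, 30.0, 5000.0, 8000.0]
y = (X[:, 0] < 60) & (X[:, 3] < 4000)            # toy presence rule
y = (y ^ (rng.random(500) < 0.1)).astype(int)    # add some label noise

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
cell = [[45.0, 5.0, 1200.0, 2500.0]]             # one unseen coastal cell
print("presence probability:", rf.predict_proba(cell)[0, 1])
```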

  17. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.

  18. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
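
    For the one-sided case, the probability equation mentioned above has a closed form in the noncentral t quantile, k = t'_γ(n − 1, z_p√n)/√n. A sketch of that case only (the two-sided factor requires the numerical integration and root-solving the codes implement):

```python
import numpy as np
from scipy import stats

def k_one_sided(n, p=0.95, gamma=0.95):
    """Exact one-sided tolerance factor from the noncentral t-distribution:
    k = t_gamma(df = n-1, nc = z_p * sqrt(n)) / sqrt(n)."""
    nc = stats.norm.ppf(p) * np.sqrt(n)          # noncentrality parameter
    return stats.nct.ppf(gamma, df=n - 1, nc=nc) / np.sqrt(n)

# Small-sample table row: 95 %/95 % one-sided k factors
for n in (5, 10, 20, 50):
    print(n, round(k_one_sided(n), 3))
```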

  19. Volume-weighted measure for eternal inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winitzki, Sergei

    2008-08-15

    I propose a new volume-weighted probability measure for cosmological 'multiverse' scenarios involving eternal inflation. The 'reheating-volume (RV) cutoff' calculates the distribution of observable quantities on a portion of the reheating hypersurface that is conditioned to be finite. The RV measure is gauge-invariant, does not suffer from the 'youngness paradox', and is independent of initial conditions at the beginning of inflation. In slow-roll inflationary models with a scalar inflaton, the RV-regulated probability distributions can be obtained by solving nonlinear diffusion equations. I discuss possible applications of the new measure to 'landscape' scenarios with bubble nucleation. As an illustration, I compute the predictions of the RV measure in a simple toy landscape.

  20. Zipf's law and the effect of ranking on probability distributions

    NASA Astrophysics Data System (ADS)

    Günther, R.; Levitin, L.; Schapiro, B.; Wagner, P.

    1996-02-01

    Ranking procedures are widely used in the description of many different types of complex systems. Zipf's law is one of the most remarkable frequency-rank relationships and has been observed independently in physics, linguistics, biology, demography, etc. We show that ranking plays a crucial role in making it possible to detect empirical relationships in systems that exist in one realization only, even when the statistical ensemble to which the systems belong has a very broad probability distribution. Analytical results and numerical simulations are presented which clarify the relations between the probability distributions and the behavior of expected values for unranked and ranked random variables. This analysis is performed, in particular, for the evolutionary model presented in our previous papers which leads to Zipf's law and reveals the underlying mechanism of this phenomenon in terms of a system with interdependent and interacting components as opposed to the “ideal gas” models suggested by previous researchers. The ranking procedure applied to this model leads to a new, unexpected phenomenon: a characteristic “staircase” behavior of the mean values of the ranked variables (ranked occupation numbers). This result is due to the broadness of the probability distributions for the occupation numbers and does not follow from the “ideal gas” model. Thus, it provides an opportunity, by comparison with empirical data, to obtain evidence as to which model relates to reality.

  1. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
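
    The univariate idea behind the method is compact: keep the Gaussian density on the allowed side and assign the remaining mass to a delta at the bound. The sketch below illustrates just this representation, not the full ensemble update.

```python
import numpy as np
from scipy import stats

def truncate_with_delta(mu, sigma, lower=0.0):
    """Represent a Gaussian estimate constrained to x >= lower as a truncated
    Gaussian plus a point mass: all probability mass below the bound goes into
    a delta at the bound (rather than being redistributed, as in classical pdf
    truncation). A sketch of the univariate idea only."""
    point_mass = stats.norm.cdf(lower, mu, sigma)   # P(x = lower), e.g. ice-free
    def pdf(x):
        x = np.asarray(x, dtype=float)
        return np.where(x >= lower, stats.norm.pdf(x, mu, sigma), 0.0)
    return point_mass, pdf

p0, pdf = truncate_with_delta(mu=0.1, sigma=0.2)
print("P(sea-ice concentration = 0):", round(p0, 3))   # nonzero, unlike pdf truncation
```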

  2. Maximum entropy analysis of NMR data of flexible multirotor molecules partially oriented in nematic solution: 2,2':5',2″-terthiophene, 2,2'- and 3,3'-dithiophene

    NASA Astrophysics Data System (ADS)

    Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo

    1994-07-01

    The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans,trans and cis,trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.

  3. A Pearson VII distribution function for fast calculation of dechanneling and angular dispersion of beams

    NASA Astrophysics Data System (ADS)

    Shao, Lin; Peng, Luohan

    2009-12-01

    Although multiple scattering theories have been well developed, numerical calculation is complicated and only tabulated values have been available, which has caused inconvenience in practical use. We have found that a Pearson VII distribution function can be used to fit Lugujjo and Mayer's probability curves describing the dechanneling phenomenon in backscattering analysis over a wide range of disorder levels. Differentiation of the obtained function gives another function to calculate the angular dispersion of the beam in the framework of Sigmund and Winterbon. The present work provides an easy calculation of both dechanneling probability and angular dispersion for any arbitrary combination of beam and target having a reduced thickness ⩾0.6, which can be implemented in the modeling of channeling spectra. Furthermore, we used a Monte Carlo simulation program to calculate the deflection probabilities and compared them with previously tabulated data. A good agreement was reached.
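
    The Pearson VII profile itself is a one-liner; the sketch below uses a generic half-width parameterization with placeholder parameter values, not the values fitted to Lugujjo and Mayer's curves.

```python
import numpy as np

def pearson_vii(x, x0=0.0, h=1.0, w=1.0, m=2.0):
    """Pearson VII profile: h * [1 + ((x-x0)/w)^2 * (2^(1/m) - 1)]^(-m).

    w is the half-width at half-maximum and m the shape exponent; m = 1 gives
    a Lorentzian and m -> infinity approaches a Gaussian. All parameter values
    here are generic placeholders."""
    return h * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

x = np.linspace(-5, 5, 11)
print(pearson_vii(x, m=2.0))
# Its derivative with respect to x (analytically, or via np.gradient) plays the
# role of the angular-dispersion function described in the abstract.
```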

  4. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  5. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
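
    The velocity PDF and the standing-vehicle probability are easy to reproduce with the standard NaSch update rules (accelerate, brake to the headway, random slowdown, move); the road length, density, and slowdown probability below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)
L, N, vmax, p, steps = 200, 50, 5, 0.25, 2000    # cells, cars, max speed, slow prob

pos = np.sort(rng.choice(L, size=N, replace=False))   # ring positions, car order fixed
vel = np.zeros(N, dtype=int)
velocity_counts = np.zeros(vmax + 1, dtype=int)

for t in range(steps):
    gap = (np.roll(pos, -1) - pos - 1) % L       # headway to the car ahead (ring)
    vel = np.minimum(vel + 1, vmax)              # 1) accelerate
    vel = np.minimum(vel, gap)                   # 2) brake to avoid collision
    slow = rng.random(N) < p                     # 3) random slowdown
    vel[slow] = np.maximum(vel[slow] - 1, 0)
    pos = (pos + vel) % L                        # 4) move
    if t >= steps // 2:                          # collect statistics after transient
        velocity_counts += np.bincount(vel, minlength=vmax + 1)

pdf = velocity_counts / velocity_counts.sum()
print("velocity PDF:", np.round(pdf, 3))
print("P(standing vehicle):", round(pdf[0], 3))  # the order-parameter candidate
```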

  7. A smooth mixture of Tobits model for healthcare expenditure.

    PubMed

    Keane, Michael; Stavrunova, Olena

    2011-09-01

    This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Shepelyansky, Dima L.

    2014-04-01

    We use the methods of quantum chaos and Random Matrix Theory for the analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy level unfolding procedure, we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French, and German. We argue that due to the absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as random independent variables.

  9. Study of Heavy-ion Induced Fission for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2014-05-01

    Fission fragment mass distributions were measured in heavy-ion induced fission of 238U. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions of 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  10. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, the necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the type pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, given prior information on these parameters and a likelihood which gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation that is solved numerically (minimization of a cost function, the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions to estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems: they avoid the approximations present in the traditional adjustment procedure based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the choice of probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluations, as well as multigroup cross-section data assimilation, are presented.
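
    As a minimal illustration of the sampling path, the sketch below draws from a posterior pdf(posterior) ∝ pdf(prior) × likelihood with a random-walk Metropolis chain; the Gaussian prior, the Gaussian measurement model, the data values, and the step size are illustrative assumptions, not the nuclear-data models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):                     # assumed Gaussian prior on one parameter
    return -0.5 * (x - 1.0)**2 / 0.5**2

def log_likelihood(x, data):          # assumed Gaussian measurement model
    return -0.5 * np.sum((data - x)**2) / 0.1**2

data = np.array([1.18, 1.22, 1.21])   # hypothetical observations
x, chain = 1.0, []
for _ in range(20000):                # random-walk Metropolis steps
    prop = x + 0.05 * rng.standard_normal()
    log_alpha = (log_prior(prop) + log_likelihood(prop, data)
                 - log_prior(x) - log_likelihood(x, data))
    if np.log(rng.random()) < log_alpha:
        x = prop
    chain.append(x)
post = np.array(chain[5000:])         # discard burn-in
print(post.mean(), post.std())        # posterior summaries
```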

  11. Combining Probability Distributions of Wind Waves and Sea Level Variations to Assess Return Periods of Coastal Floods

    NASA Astrophysics Data System (ADS)

    Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.

    2017-12-01

    Predicting the behaviour of the joint effect of sea level and wind waves is of great significance due to the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations accompanied by waves will be even more harmful in the future. The main challenge when evaluating the joint effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables, to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the different shapes of the wave height distribution affect the distribution of the sum, and which of the components dominates under different wave conditions. As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, which provides a flexible tool for evaluating different risk levels in the current and future climate.
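
    The independence approximation described above amounts to convolving the two densities to obtain the density of the sum. A minimal sketch, with a placeholder Rayleigh-type run-up density and a Gaussian sea-level density standing in for the measured distributions:

```python
import numpy as np

# Illustrative stand-ins for the two densities on a common grid (metres)
dx = 0.01
x = np.arange(0, 5, dx)
f_wave = (x / 0.5**2) * np.exp(-0.5 * (x / 0.5)**2)               # Rayleigh-type run-up pdf
f_sea = np.exp(-0.5 * ((x - 1.0) / 0.3)**2) / (0.3 * np.sqrt(2 * np.pi))

# Density of the sum of independent variables = convolution of the densities
f_sum = np.convolve(f_wave, f_sea) * dx
x_sum = np.arange(len(f_sum)) * dx

# Exceedance probability of a given total water level, e.g. 2.5 m
level = 2.5
print(np.trapz(f_sum[x_sum >= level], dx=dx))
```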

  12. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.

  13. Mass-yield distributions of fission products from 20, 32, and 45 MeV proton-induced fission of 232Th

    NASA Astrophysics Data System (ADS)

    Naik, H.; Goswami, A.; Kim, G. N.; Kim, K.; Suryanarayana, S. V.

    2013-10-01

    The yields of various fission products in the 19.6, 32.2, and 44.8 MeV proton-induced fission of 232Th have been determined by a recoil catcher and an off-line γ-ray spectrometric technique using the BARC-TIFR Pelletron in India and the MC-50 cyclotron in Korea. The mass-yield distributions were obtained from the fission product yields using charge distribution corrections. The peak-to-valley (P/V) ratios of the present work and of literature data for 232Th(p,f) and 238U(p,f) were obtained from the mass-yield distributions. The present and existing literature data for 232Th(p,f), 232Th(n,f), and 232Th(γ,f) at various energies were compared with those for 238U(p,f), 238U(n,f), and 238U(γ,f) to examine a probable nuclear structure effect. The role of the Th-anomaly in the peak-to-valley ratio for proton-, neutron-, and photon-induced fission of 232Th was discussed alongside similar data for 238U. On the other hand, the fine structure in the mass-yield distributions of the fissioning systems at various excitation energies is explained in terms of the standard I and standard II asymmetric fission modes, besides the probable roles of the even-odd effect, the A/Z ratio, and the fissility parameter.

  14. Number distribution of emitted electrons by MeV H+ impact on carbon

    NASA Astrophysics Data System (ADS)

    Ogawa, H.; Koyanagi, Y.; Hongo, N.; Ishii, K.; Kaneko, T.

    2017-09-01

    The statistical distributions of the number of forward- and backward-emitted secondary electrons (SEs) from a thin carbon foil have been measured in coincidence with foil-transmitted H+ ions of 0.5-3.0 MeV in 0.5 MeV steps. The measured SE energy spectra were fitted by assuming a Pólya distribution for the simultaneous n-SE emission probabilities. A similar analysis was carried out for our previous data obtained with two carbon foils of different thicknesses. As a result, the measured spectra could be reproduced as well as by an analysis that places no restriction on the emission probabilities, for both forward and backward SE emission. The obtained b-parameter of the Pólya distribution, which is a measure of the deviation from a Poisson distribution due to cascade multiplication by high-energy internal SEs, increases monotonically with the incident energy of the proton beam. On the other hand, no clear foil-thickness dependence is observed for the b-parameter. A theoretical model that could reproduce the magnitude of the b-parameter for SE energy spectra obtained with thick Au, Cu, and Al targets is found to significantly overestimate our values for thin carbon foils. Another model calculation reproduces our b-values very well.
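
    The Pólya distribution used in such fits is, in the common parameterization with mean m and broadening parameter b, a negative binomial that reduces to a Poisson distribution as b tends to zero. A small sketch of this correspondence, with illustrative values of m and b (the paper's fitted values are not reproduced here):

```python
import numpy as np
from scipy.stats import nbinom, poisson

# Polya (negative binomial) pmf with mean m and Polya parameter b;
# variance is m * (1 + b * m), and b -> 0 recovers the Poisson limit.
def polya_pmf(n, m, b):
    r = 1.0 / b                   # number-of-successes parameter
    p = 1.0 / (1.0 + b * m)       # success probability
    return nbinom.pmf(n, r, p)

n = np.arange(0, 15)
print(polya_pmf(n, m=3.0, b=0.2))   # cascade-broadened statistics
print(poisson.pmf(n, 3.0))          # Poisson reference with the same mean
```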

  15. Multi-scale Characterization and Modeling of Surface Slope Probability Distribution for ~20-km Diameter Lunar Craters

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Robinson, M. S.; Boyd, A. K.

    2013-12-01

    Craters ~20-km diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo-era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies), where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high-resolution, high-precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple-baseline) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that the use of long-tailed probability distributions (like the Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics of lunar topography. References: [1] Moore, H. (1971), JGR, 75(11). [2] Marcus, A. H. (1969), JGR, 74(22). [3] Pike, R. J. (1970), U.S. Geological Survey Working Paper. [4] Costes, N. C., Farmer, J. E., and George, E. B. (1972), NASA Technical Report TR R-401. [5] Parker, M. N. and Tyler, G. L. (1973), Radio Science, 8(3), 177-184. [6] Alekseev, V. A., et al. (1968), Soviet Astronomy, 11, 860. [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488. [8] Smith et al. (2010), GRL, 37, L18204, DOI: 10.1029/2010GL043751. [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P. (2013), LPSC, #2924.

  16. A Bayesian Approach to Magnetic Moment Determination Using μSR

    NASA Astrophysics Data System (ADS)

    Blundell, S. J.; Steele, A. J.; Lancaster, T.; Wright, J. D.; Pratt, F. L.

    A significant challenge in zero-field μSR experiments arises from the uncertainty in the muon site. It is possible to calculate the dipole field (and hence the precession frequency v) at any particular site given the magnetic moment μ and magnetic structure. One can also evaluate f(v), the probability distribution function of v, assuming that the muon site can be anywhere within the unit cell with equal probability, excluding physically forbidden sites. Since v is obtained from experiment, what we would like to know is g(μ|v), the probability density function of μ given the observed v. This can be obtained from our calculated f(v|μ) using Bayes' theorem. We describe an approach to this problem which we have used to extract information about real systems including a low-moment osmate compound, a family of molecular magnets, and an iron-arsenide compound.
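
    A grid-based sketch of the Bayes inversion described here, g(μ|v) ∝ f(v|μ) π(μ); the linear v-μ scaling, the Gaussian form of f, and the flat prior below are illustrative assumptions rather than the dipole-field calculation of the paper:

```python
import numpy as np

# Grid Bayes update: posterior g(mu | v_obs) proportional to f(v_obs | mu) * prior(mu).
mu = np.linspace(0.01, 2.0, 500)         # candidate moments (illustrative units)
v_obs, sigma = 1.3, 0.1                  # observed frequency and its spread (illustrative)

def f_v_given_mu(v, mu):                 # assumed: v scales linearly with mu
    return np.exp(-0.5 * ((v - 1.0 * mu) / sigma)**2)

prior = np.ones_like(mu)                 # flat prior over allowed sites
post = f_v_given_mu(v_obs, mu) * prior
post /= np.trapz(post, mu)               # normalize to a density in mu
print(mu[np.argmax(post)])               # MAP estimate of the moment
```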

  17. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.

  18. A Monte Carlo study of fluorescence generation probability in a two-layered tissue model

    NASA Astrophysics Data System (ADS)

    Milej, Daniel; Gerega, Anna; Wabnitz, Heidrun; Liebert, Adam

    2014-03-01

    It was recently reported that the time-resolved measurement of diffuse reflectance and/or fluorescence during injection of an optical contrast agent may constitute a basis for a technique to assess cerebral perfusion. In this paper, we present results of Monte Carlo simulations of the propagation of excitation photons and tracking of fluorescence photons in a two-layered tissue model mimicking intra- and extracerebral tissue compartments. Spatial 3D distributions of the probability that the photons were converted from excitation to emission wavelength in a defined voxel of the medium (generation probability) during their travel between source and detector were obtained for different optical properties in intra- and extracerebral tissue compartments. It was noted that the spatial distribution of the generation probability depends on the distribution of the fluorophore in the medium and is influenced by the absorption of the medium and of the fluorophore at excitation and emission wavelengths. Simulations were also carried out for realistic time courses of the dye concentration in both layers. The results of the study show that the knowledge of the absorption properties of the medium at excitation and emission wavelengths is essential for the interpretation of the time-resolved fluorescence signals measured on the surface of the head.

  19. Wireless cellular networks with Pareto-distributed call holding times

    NASA Astrophysics Data System (ADS)

    Rodriguez-Dagnino, Ramon M.; Takagi, Hideaki

    2001-07-01

    Nowadays, there is a growing interest in providing Internet access to mobile users. For instance, NTT DoCoMo in Japan operates a major mobile phone network that offers the Internet service named 'i-mode' to more than 17 million subscribers. Internet traffic measurements show that session durations, i.e. Call Holding Times (CHT), have probability distributions with heavy tails, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important to know the number of handovers during a call for a network designer to make an appropriate dimensioning of virtual circuits for a wireless cell. The handover traffic has a direct impact on the Quality of Service (QoS); e.g. the service disruption due to handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHT are Pareto distributed (a heavy-tail distribution) and the Cell Residence Times (CRT) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of Whittaker's function. We compare the Pareto case with the cases of k-Erlang and hyperexponential distributions for the CHT.
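
    The renewal-theory setup lends itself to a quick Monte Carlo check of the handover pmf. In the sketch below the Pareto shape and scale and the mean cell residence time are illustrative values; with exponential CRTs, memorylessness makes the residual time in the first cell exponential as well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the pmf of the number of handovers during a call.
alpha, xm = 1.5, 1.0        # Pareto shape and scale for the CHT (minutes)
mu = 1.0 / 2.0              # cell residence rate (mean CRT = 2 minutes)
n_calls = 50_000

cht = xm * (1.0 - rng.random(n_calls)) ** (-1.0 / alpha)   # inverse-CDF sampling
handovers = np.zeros(n_calls, dtype=int)
for i in range(n_calls):
    t = rng.exponential(1.0 / mu)       # residual time in the first cell
    while t < cht[i]:
        handovers[i] += 1
        t += rng.exponential(1.0 / mu)  # successive cell residence times

pmf = np.bincount(handovers) / n_calls
print(pmf[:10])                          # P(0), P(1), ... handovers
```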

  20. Risk-based maintenance of ethylene oxide production facilities.

    PubMed

    Khan, Faisal I; Haddara, Mahmoud R

    2004-05-20

    This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, those which are most probable are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and the probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.

  1. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X sub E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X sub E is a probability distribution p sub x on X which describes the observer's opinion about where X sub E is likely to be in X; a hard bound on X sub E is an inequality Q sub x(X sub E, X sub E) is equal to or less than 1, where Q sub x is a positive definite quadratic form on X. A hard bound Q sub x can be softened to many different probability distributions p sub x, but all these p sub x's carry much new information about X sub E which is absent from Q sub x, and some information which contradicts Q sub x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p sub x. If that probability distribution was obtained by softening a hard prior bound Q sub x, rather than by objective statistical inference independent of y, then p sub x contains so much unsupported new information absent from Q sub x that conclusions about z obtained with SI or BI would seem to be suspect.

  2. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1987-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X sub E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X sub E is a probability distribution p sub x on X which describes the observer's opinion about where X sub E is likely to be in X; a hard bound on X sub E is an inequality Q sub x(X sub E, X sub E) is equal to or less than 1, where Q sub x is a positive definite quadratic form on X. A hard bound Q sub x can be softened to many different probability distributions p sub x, but all these p sub x's carry much new information about X sub E which is absent from Q sub x, and some information which contradicts Q sub x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p sub x. If that probability distribution was obtained by softening a hard prior bound Q sub x, rather than by objective statistical inference independent of y, then p sub x contains so much unsupported new information absent from Q sub x that conclusions about z obtained with SI or BI would seem to be suspect.

  3. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density.

    PubMed

    Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro

    2018-01-01

    The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to many empirical datasets. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed for Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in the empirical datasets and to judiciously determine whether there is a need for data trimming and, if so, at which points it should be done.

  4. ExGUtils: A Python Package for Statistical Analysis With the ex-Gaussian Probability Density

    PubMed Central

    Moret-Tatay, Carmen; Gamermann, Daniel; Navarro-Pardo, Esperanza; Fernández de Córdoba Castellá, Pedro

    2018-01-01

    The study of reaction times and their underlying cognitive processes is an important field in Psychology. Reaction times are often modeled through the ex-Gaussian distribution, because it provides a good fit to many empirical datasets. The complexity of this distribution makes the use of computational tools an essential element. Therefore, there is a strong need for efficient and versatile computational tools for research in this area. In this manuscript we discuss some mathematical details of the ex-Gaussian distribution and apply the ExGUtils package, a set of functions and numerical tools programmed for Python and developed for the numerical analysis of data involving the ex-Gaussian probability density. In order to validate the package, we present an extensive analysis of fits obtained with it, discuss advantages and differences between the least squares and maximum likelihood methods, and quantitatively evaluate the goodness of the obtained fits (a point usually overlooked in most of the literature in the area). The analysis allows one to identify outliers in the empirical datasets and to judiciously determine whether there is a need for data trimming and, if so, at which points it should be done. PMID:29765345
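
    SciPy's exponnorm implements the same ex-Gaussian density (parameterized by K = τ/σ), so the maximum-likelihood fit discussed above can be sketched without the ExGUtils package itself; the reaction-time parameters below are illustrative:

```python
import numpy as np
from scipy.stats import exponnorm

# Ex-Gaussian = Normal(mu, sigma) + Exponential(tau); SciPy's exponnorm uses K = tau / sigma.
rng = np.random.default_rng(0)
mu, sigma, tau = 400.0, 40.0, 100.0            # illustrative reaction-time parameters (ms)
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Maximum-likelihood fit of the three parameters
K, loc, scale = exponnorm.fit(rt)
print("mu =", loc, "sigma =", scale, "tau =", K * scale)
```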

  5. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertainty in parameters is represented by probability distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that in others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), whereby the FSPDE becomes a fuzzy partial differential equation (FPDE), (b) transforming a possibility distribution to a probability distribution (Method II), whereby the FSPDE becomes a stochastic partial differential equation (SPDE), and (c) a combination of Monte Carlo methods and FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.

  6. Time analysis of volcanic activity on Io by means of plasma observations

    NASA Technical Reports Server (NTRS)

    Mekler, Y.; Eviatar, A.

    1980-01-01

    A model of Io volcanism in which the probability of activity obeys a binomial distribution is presented. Observed values of the electron density obtained over a 3-year period by ground-based spectroscopy are fitted to such a distribution. The best fit is found for a total number of 15 volcanoes with a probability of individual activity at any time of 0.143. The Pioneer 10 ultraviolet observations are reinterpreted as emissions of sulfur and oxygen ions and are found to be consistent with a plasma much less dense than that observed by the Voyager spacecraft. Late 1978 and the first half of 1979 are shown to be periods of anomalous volcanicity. Rapid variations in electron density are related to enhanced radial diffusion.
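
    The binomial model stated in the abstract (15 volcanoes, each active at any time with probability 0.143) gives activity probabilities directly, e.g.:

```python
from scipy.stats import binom

# Binomial model of Io volcanism: N = 15 volcanoes, individual activity probability p = 0.143
N, p = 15, 0.143
for k in range(6):
    print(k, "active:", binom.pmf(k, N, p))
print("P(at least one active):", 1 - binom.pmf(0, N, p))
```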

  7. Applications of the first digit law to measure correlations.

    PubMed

    Gramm, R; Yost, J; Su, Q; Grobe, R

    2017-04-01

    The quasiempirical Benford law predicts that the distribution of the first significant digit of random numbers obtained from mixed probability distributions is surprisingly meaningful and reveals some universal behavior. We generalize this finding to examine the joint first-digit probability of a pair of random numbers and show that correlations undetectable by means of the usual covariance-based measure can be identified in the statistics of the corresponding first digits. We illustrate this new measure by analyzing the correlations and anticorrelations of the positions of two interacting particles in their quantum mechanical ground state. This suggests that by using this measure, the presence or absence of correlations can be determined even if only the first digit of noisy experimental data can be measured accurately.
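
    For reference, the first-digit law itself is P(d) = log10(1 + 1/d); the sketch below also checks it empirically on a broad log-uniform mixture (the mixture is an illustrative choice, not the quantum-mechanical data of the paper):

```python
import numpy as np

# Benford's law for the first significant digit: P(d) = log10(1 + 1/d)
d = np.arange(1, 10)
benford = np.log10(1.0 + 1.0 / d)
print(dict(zip(d, benford.round(4))))

# First digits of samples spanning many decades follow the law closely
rng = np.random.default_rng(0)
x = 10 ** rng.uniform(0, 6, 100_000)
first = np.array([int(f"{v:.6e}"[0]) for v in x])   # leading significant digit
emp = np.bincount(first, minlength=10)[1:] / len(first)
print(emp.round(4))
```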

  8. A hydroclimatological approach to predicting regional landslide probability using Landlab

    NASA Astrophysics Data System (ADS)

    Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.

    2018-02-01

    We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.

  9. Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite

    NASA Technical Reports Server (NTRS)

    Fisher, D. E.

    1985-01-01

    Fission tracking techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The obtained fission tracking data showed a heterogeneous distribution of tracks with a significant portion of track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of the heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.

  10. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
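
    A minimal sketch of a one-dimensional Robbins-Monro iteration of the kind described, with an illustrative noisy regression function whose root is known:

```python
import numpy as np

# Robbins-Monro: find the root of a regression function f observed with noise,
# using x_{n+1} = x_n - a_n * y_n with step sizes a_n = 1/n.
rng = np.random.default_rng(0)

def noisy_f(x):                       # f(x) = tanh(x - 2), root at x = 2 (illustrative)
    return np.tanh(x - 2.0) + 0.1 * rng.standard_normal()

x = 0.0
for n in range(1, 10_001):
    x -= (1.0 / n) * noisy_f(x)       # steps satisfy sum a_n = inf, sum a_n^2 < inf
print(x)                              # converges to 2 in mean square and a.s.
```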

  11. High-precision simulation of the height distribution for the KPZ equation

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Le Doussal, Pierre; Majumdar, Satya N.; Rosso, Alberto; Schehr, Gregory

    2018-03-01

    The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as 10^-1000 in the tails. Both short and long times are investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations. At short times the agreement with the analytical expression is spectacular. We observe that the far left and right tails, with exponents 5/2 and 3/2, respectively, are preserved also in the region of long times. We present some evidence for the predicted non-trivial crossover in the left tail from the 5/2 tail exponent to the cubic tail of the Tracy-Widom distribution, although the details of the full scaling form remain beyond reach.

  12. On the properties of stochastic intermittency in rainfall processes.

    PubMed

    Molini, A; La Barbera, P; Lanza, L G

    2002-01-01

    In this work we propose a mixed approach to the modelling of rainfall events, based on the analysis of geometrical and statistical properties of rain intermittency in time, combined with the predictive power derived from the analysis of the distribution of no-rain periods and from the binary decomposition of the rain signal. Some recent hypotheses on the nature of rain intermittency are reviewed as well. In particular, the internal intermittent structure of a high-resolution pluviometric time series covering one decade, recorded at the tipping-bucket station of the University of Genova, is analysed by separating the internal intermittency of rainfall events from the inter-arrival process through a simple geometrical filtering procedure. In this way it is possible to associate no-rain intervals with a probability distribution both by virtue of their position within the event and of their percentage. From this analysis, an invariant probability distribution for the no-rain periods within the events is obtained at different aggregation levels, and its satisfactory agreement with a typical extreme value distribution is shown.

  13. Clustering coefficients of protein-protein interaction networks

    NASA Astrophysics Data System (ADS)

    Miller, Gerald A.; Shi, Yi Y.; Qian, Hong; Bomsztyk, Karol

    2007-05-01

    The properties of certain networks are determined by hidden variables that are not explicitly measured. The conditional probability (propagator) that a vertex with a given value of the hidden variable is connected to k other vertices determines all measurable properties. We study hidden variable models and find an averaging approximation that enables us to obtain a general analytical result for the propagator. Analytic results showing the validity of the approximation are obtained. We apply hidden variable models to protein-protein interaction networks (PINs) in which the hidden variable is the association free energy, determined by distributions that depend on biochemistry and evolution. We compute degree distributions as well as clustering coefficients of several PINs of different species; good agreement with measured data is obtained. For the human interactome two different parameter sets give the same degree distributions, but the computed clustering coefficients differ by a factor of about 2. This shows that degree distributions are not sufficient to determine the properties of PINs.

  14. Electron density and electron temperature measurement in a bi-Maxwellian electron distribution using a derivative method of Langmuir probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Ikjin; Chung, ChinWook; Youn Moon, Se

    2013-08-15

    In plasma diagnostics with a single Langmuir probe, the electron temperature T{sub e} is usually obtained from the slope of the logarithm of the electron current or from the electron energy probability function of the current (I)-voltage (V) curve. Recently, Chen [F. F. Chen, Phys. Plasmas 8, 3029 (2001)] suggested a derivative analysis method to obtain T{sub e} from the ratio between the probe current and the derivative of the probe current at the plasma potential, where the ion current becomes zero. Based on this method, electron temperatures and electron densities were measured and compared with those from electron energy distribution function (EEDF) measurements in Maxwellian and bi-Maxwellian electron distribution conditions. In a bi-Maxwellian electron distribution, we found that the electron temperature T{sub e} obtained from the method is always lower than the effective temperature T{sub eff} derived from EEDFs. The theoretical analysis for this is presented.

  15. Anomalous transport in fluid field with random waiting time depending on the preceding jump length

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Guo-Hua

    2016-11-01

    Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations, for the Gaussian distribution and the Lévy flight, with the probability density function of the waiting time depending quadratically on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of the Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).

  16. In-beam fission study at JAEA for heavy element synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2013-04-01

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. The fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  17. Work distributions for random sudden quantum quenches

    NASA Astrophysics Data System (ADS)

    Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter

    2017-05-01

    The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.

  18. Convergence of Transition Probability Matrix in CLV Markov Models

    NASA Astrophysics Data System (ADS)

    Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.

    2018-04-01

    A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of the MCM is its long-term behavior, which is derived from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n tends to infinity. Mathematically, this convergence amounts to finding the limit of the transition matrix raised to the power n as n tends to infinity. The convergence is of interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to find this limit is the limiting-distribution approach. In this paper, the convergence of the transition probability matrix is instead obtained with a simple concept from linear algebra, namely by diagonalizing the matrix. This method has a higher level of complexity because the matrix must be diagonalized, but it has the advantage of yielding a closed form for the nth power of the transition probability matrix, which makes it possible to examine the transition matrix before it becomes stationary. Example cases are taken from a CLV model using the MCM, called the CLV-Markov model. Several models are examined through their transition probability matrices to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with that obtained by the commonly used limiting-distribution method.
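
    A small sketch of the diagonalization route: with P = V diag(w) V^(-1), the n-step matrix is P^n = V diag(w^n) V^(-1), and as n grows only the unit eigenvalue survives. The 3-state matrix below is illustrative, not one of the CLV models of the paper:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],        # illustrative 3-state transition matrix
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# P^n via diagonalization
w, V = np.linalg.eig(P)
n = 50
Pn = (V @ np.diag(w**n) @ np.linalg.inv(V)).real
print(Pn)                             # all rows approach the stationary distribution

# Cross-check: left eigenvector for eigenvalue 1, normalized to sum to 1
wl, Vl = np.linalg.eig(P.T)
pi = Vl[:, np.argmin(np.abs(wl - 1.0))].real
print(pi / pi.sum())
```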

  19. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America

  20. Processor tradeoffs in distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.

    1987-01-01

    The problem of the optimization of the design of real-time distributed systems is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and the burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.

  1. Spatial distribution of traffic in a cellular mobile data network

    NASA Astrophysics Data System (ADS)

    Linnartz, J. P. M. G.

    1987-02-01

    The use of integral transforms of the probability density function of the received power to analyze the relation between the spatial distributions of offered and throughput packet traffic in a mobile radio network with Rayleigh fading channels and ALOHA multiple access was assessed. A method to obtain the spatial distribution of throughput traffic from a prescribed spatial distribution of offered traffic is presented. Incoherent and coherent addition of interference signals is considered. The channel behavior for heavy traffic loads is studied. In both the incoherent and coherent cases, the spatial distribution of offered traffic required to ensure a prescribed spatially uniform throughput is synthesized numerically.

  2. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of paying penalties or receiving negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure, which improve the system capacity and reduce the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models; hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failures caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, are selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold, based on Austrian emission standards for ammonia (De Toffol, 2006), with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability instead of the failure probability is tested by comparing the solutions obtained from mean-pbf and mean-pf optimizations.

  3. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves the computation of a considerable number of derivatives, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
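
    A minimal sketch of the Monte Carlo error propagation described at the end of the abstract: samples of a random vector are pushed through a nonlinear transform (the transform and covariance below are illustrative), and the expectation and covariance are read off without any linearization:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Sample the input vector, apply the nonlinear transform, summarize the output
x = rng.multivariate_normal(mean, cov, size=200_000)
y = np.column_stack([x[:, 0] * np.cos(x[:, 1]),   # illustrative nonlinear transform
                     x[:, 0] * np.sin(x[:, 1])])

print(y.mean(axis=0))            # expectation without any linearization
print(np.cov(y, rowvar=False))   # covariance matrix of the transformed vector
```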

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978

  5. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
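
    The Dempster-Shafer fusion step can be illustrated with a minimal sketch of Dempster's rule of combination, here restricted to singleton focal elements over shared travel-time bins; the mass values are illustrative, and the paper's actual evidence structure may differ.

      import numpy as np

      def dempster_combine(m1, m2):
          """Combine two basic probability assignments over the same
          discrete frame (here: travel-time bins) with Dempster's rule.
          With singleton focal elements, all off-diagonal mass is conflict.
          """
          m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
          joint = np.outer(m1, m2)
          agreement = np.diag(joint).sum()       # mass where both sources agree
          if agreement == 0:
              raise ValueError("total conflict: sources share no support")
          return np.diag(joint) / agreement      # renormalize by 1 - conflict

      # Illustrative masses from point-detector and interval-detector estimates
      m_point = [0.5, 0.3, 0.2]
      m_interval = [0.3, 0.4, 0.3]
      print(dempster_combine(m_point, m_interval))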

  6. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    NASA Astrophysics Data System (ADS)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r onto an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPE) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.

  7. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
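
    A compact sketch of the log-probability regression method described above, under common assumptions (Blom-type plotting positions, left-censoring at a single detection limit); details such as the plotting-position constants vary across implementations.

      import numpy as np
      from scipy import stats

      def log_probability_regression(uncensored, n_total):
          """Regress log concentrations of the uncensored observations on
          their normal scores, then impute the censored portion from the
          fitted lognormal below the censoring level.
          """
          m = len(uncensored)
          n_cens = n_total - m
          # plotting positions for the uncensored (upper) part of the sample
          pp = (n_cens + np.arange(1, m + 1) - 0.375) / (n_total + 0.25)
          z = stats.norm.ppf(pp)
          slope, intercept, *_ = stats.linregress(z, np.log(np.sort(uncensored)))
          # impute censored values from the fitted line at the lower positions
          pp_cens = (np.arange(1, n_cens + 1) - 0.375) / (n_total + 0.25)
          imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))
          full = np.concatenate([imputed, np.sort(uncensored)])
          return full.mean(), full.std(ddof=1)

      # 20 samples, lowest 8 treated as below the detection limit (synthetic)
      obs = np.exp(np.random.default_rng(2).normal(0.0, 1.0, 20))
      print(log_probability_regression(np.sort(obs)[8:], 20))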

  8. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  9. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  10. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies that are suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring, in real time, airborne particle concentrations (according to different metrics). Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time-resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature. These range from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the field is still looking for an appropriate and robust method. In this context, this exploratory study investigates a statistical method to analyse time-resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data were used from a workplace study that investigated the potential for exposure via inhalation during cleanout operations, by sandpapering, of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through near-field and far-field approaches, and several size-integrated and time-resolved devices were used. The analysis of the results presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other was measuring in parallel in the far field. The Bayesian probabilistic approach allows a probabilistic modelling of the data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time-resolved data obtained at the source can be compared with those issuing from the time-resolved data obtained far-field, leading to a quantitative estimation of the airborne particles released at the source while the task is performed. Beyond the results obtained, this exploratory study indicates that the analysis of the results requires specific experience in statistics.

  11. Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.

    PubMed

    Semenov, Yuri S; Novozhilov, Artem S

    2015-08-01

    We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.

  12. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. The method uses the survivor function obtained from a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model of neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
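
    As a numerical complement to the analytical route via the master equation, interarrival times of a non-homogeneous Poisson process (here with a time-varying rate standing in for full state dependence) can be sampled by thinning; the rate function below is illustrative.

      import numpy as np

      def interarrival_times(rate_fn, rate_max, t_end, rng):
          """Sample jump times of a varying-rate Poisson process by
          Ogata-style thinning, then return interarrival times.
          rate_fn(t) must be bounded above by rate_max.
          """
          t, times = 0.0, []
          while t < t_end:
              t += rng.exponential(1.0 / rate_max)   # candidate from dominating process
              if t < t_end and rng.uniform() < rate_fn(t) / rate_max:
                  times.append(t)                    # accept with prob. rate/rate_max
          return np.diff(times)

      rng = np.random.default_rng(3)
      gaps = interarrival_times(lambda t: 1.0 + 0.8 * np.sin(t), 1.8, 10_000.0, rng)
      print(gaps.mean(), np.quantile(gaps, [0.5, 0.9]))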

  13. [Clinical evaluation of heavy-particle radiotherapy using dose volume histogram (DVH)].

    PubMed

    Terahara, A; Nakano, T; Tsujii, H

    1998-01-01

    Radiotherapy with heavy particles such as protons and heavy charged particles is a promising modality for the treatment of localized malignant tumors because of its good dose distribution. A dose calculation and radiotherapy planning system, which is essential for this kind of treatment, has been developed in recent years. It has the capability to compute the dose-volume histogram (DVH), which contains dose-volume information for the target volume and other volumes of interest. Recently, the DVH has been commonly used to evaluate and compare dose distributions in radiotherapy with both photons and heavy particles, and it shows that a superior dose distribution is obtained in heavy-particle radiotherapy. The DVH is also utilized for the evaluation of dose distribution in relation to clinical outcomes. Models such as normal tissue complication probability (NTCP) and tumor control probability (TCP), which can be calculated from the DVH, have been proposed by several authors; they are applied to evaluate dose distributions themselves and to evaluate them in relation to clinical results. The DVH is now a useful and important tool, but further studies are needed to use the DVH and these models practically for the clinical evaluation of heavy-particle radiotherapy.
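
    A cumulative DVH is simple to compute from per-voxel doses; the sketch below uses synthetic doses and is not tied to any particular planning system.

      import numpy as np

      def cumulative_dvh(dose_voxels, bin_width=0.5):
          """Cumulative dose-volume histogram: the fraction of the volume
          receiving at least each dose level, from per-voxel doses.
          """
          dose = np.asarray(dose_voxels, float)
          levels = np.arange(0.0, dose.max() + bin_width, bin_width)
          volume_fraction = np.array([(dose >= d).mean() for d in levels])
          return levels, volume_fraction

      # Illustrative voxel doses (Gy) for a target volume
      rng = np.random.default_rng(4)
      levels, vf = cumulative_dvh(rng.normal(60.0, 3.0, 50_000))
      print(levels[:5], vf[:5])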

  14. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
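
    A minimal sketch of the optimal-transport estimate with squared Euclidean cost: solve the discrete transport linear program between the two observed count vectors and normalize the rows of the optimal flow. Site coordinates and counts are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      def ot_transition_matrix(n_before, n_after, coords):
          """Estimate transition probabilities between sites by minimizing
          the expected squared Euclidean transport cost subject to the
          observed counts at the two time points (a discrete OT linear program).
          """
          k = len(coords)
          cost = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1).ravel()
          A_eq, b_eq = [], []
          for i in range(k):                    # row sums = counts before
              row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1
              A_eq.append(row); b_eq.append(n_before[i])
          for j in range(k):                    # column sums = counts after
              col = np.zeros(k * k); col[j::k] = 1
              A_eq.append(col); b_eq.append(n_after[j])
          res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq))
          flow = res.x.reshape(k, k)
          return flow / flow.sum(axis=1, keepdims=True)   # rows: P(move i -> j)

      coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # site locations
      print(ot_transition_matrix(np.array([50, 30, 20]),
                                 np.array([40, 35, 25]), coords))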

  15. Probabilistic approach to lysozyme crystal nucleation kinetics.

    PubMed

    Dimitrov, Ivaylo L; Hodzhaoglu, Feyzim V; Koleva, Dobryana P

    2015-09-01

    Nucleation of lysozyme crystals in quiescent solutions at a regime of progressive nucleation is investigated under an optical microscope at conditions of constant supersaturation. A method based on the stochastic nature of crystal nucleation and using discrete time sampling of small solution volumes for the presence or absence of detectable crystals is developed. It allows probabilities for crystal detection to be experimentally estimated. One hundred single samplings were used for each probability determination for 18 time intervals and six lysozyme concentrations. Fitting of a particular probability function to experimentally obtained data made possible the direct evaluation of stationary rates for lysozyme crystal nucleation, the time for growth of supernuclei to a detectable size and probability distribution of nucleation times. Obtained stationary nucleation rates were then used for the calculation of other nucleation parameters, such as the kinetic nucleation factor, nucleus size, work for nucleus formation and effective specific surface energy of the nucleus. The experimental method itself is simple and adaptable and can be used for crystal nucleation studies of arbitrary soluble substances with known solubility at particular solution conditions.
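
    Under stationary nucleation, a commonly used detection-probability form is P(t) = 1 - exp(-J V (t - t0)), with nucleation rate J, solution volume V, and growth time t0 to a detectable size; the sketch below fits such a curve to an illustrative detected fraction (the paper's exact fitting function may differ).

      import numpy as np
      from scipy.optimize import curve_fit

      def p_detect(t, JV, t0):
          # probability that a volume contains a detectable crystal by time t
          return 1.0 - np.exp(-JV * np.clip(t - t0, 0.0, None))

      t = np.array([2, 4, 6, 8, 10, 12, 14, 16.0])   # sampling times (h), illustrative
      p_obs = np.array([0.0, 0.05, 0.18, 0.30, 0.44, 0.55, 0.66, 0.72])
      (JV, t0), _ = curve_fit(p_detect, t, p_obs, p0=[0.1, 1.0])
      print(f"J*V ~ {JV:.3f} per hour, growth time t0 ~ {t0:.2f} h")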

  16. Outage Analysis of Dual-hop Cognitive Networks with Relay Selection over Nakagami-m Fading Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Zongsheng; Pi, Xurong

    2014-09-01

    In this paper, we investigate the outage performance of decode-and-forward cognitive relay networks over Nakagami-m fading channels, considering both best relay selection and interference constraints. Focusing on relay selection and making use of the underlay cognitive approach, an exact closed-form outage probability expression is derived in an independent, non-identically distributed Nakagami-m environment. The closed-form outage probability provides an efficient means to evaluate the effects of the maximum allowable interference power, the number of cognitive relays, and the channel conditions between the primary user and cognitive users. Finally, we present numerical results to validate the theoretical analysis. Moreover, from the simulation results, we observe that the system achieves full diversity.

  17. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them separately, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrated data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process; the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior distribution is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtained a posterior distribution to evaluate the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.

  18. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
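
    The structure of the result can be checked numerically: with a uniformly distributed start time, the count distribution is the Poisson distribution averaged over the integrated intensity. The sketch below does that average on a grid rather than using the closed-form incomplete-gamma expressions; all parameter values are illustrative.

      import numpy as np
      from scipy import stats

      def count_pmf(n, T, I0, tau, t0_max, n_grid=2000):
          """P(n counts in a window of length T) for intensity I0*exp(-t/tau),
          with start time t0 ~ Uniform(0, t0_max). The integrated intensity is
          W(t0) = I0*tau*(exp(-t0/tau) - exp(-(t0+T)/tau)).
          """
          t0 = np.linspace(0.0, t0_max, n_grid)
          W = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))
          return stats.poisson.pmf(n, W).mean()   # average over uniform t0

      probs = [count_pmf(n, T=1.0, I0=5.0, tau=2.0, t0_max=10.0) for n in range(8)]
      print(np.round(probs, 4), "partial sum ~", round(sum(probs), 4))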

  19. A multimodal detection model of dolphins to estimate abundance validated by field experiments.

    PubMed

    Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko

    2013-09-01

    Abundance estimation of marine mammals requires matching detections of an animal or a group of animals by two independent means. A multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins is proposed. The method does not require a specific time window to match the cues of both means for applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals at the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The detection probability of a single animal was smaller than that of larger cluster sizes, as predicted by the model and confirmed by the field data. However, dense groups of Irrawaddy dolphins showed differences in the cluster sizes observed by visual and acoustic methods. The lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.

  20. Using new edges for anomaly detection in computer networks

    DOEpatents

    Neil, Joshua Charles

    2017-07-04

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.

  1. Using new edges for anomaly detection in computer networks

    DOEpatents

    Neil, Joshua Charles

    2015-05-19

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.
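
    One plausible reading of the scoring step, as a sketch: combine per-edge probabilities along a path into a log-score and take the detection threshold from the empirical distribution of scores under historical conditions. The probabilities and quantile level below are illustrative.

      import numpy as np

      def path_score(edge_probs):
          """Combine per-edge probabilities along a path into one anomaly
          score; low-probability (new or rare) edges drive the score up.
          """
          return -np.sum(np.log(edge_probs))

      def score_threshold(historical_paths, alpha=0.999):
          """Threshold from the empirical distribution of scores under
          historical (assumed benign) conditions."""
          scores = [path_score(p) for p in historical_paths]
          return np.quantile(scores, alpha)

      rng = np.random.default_rng(5)
      history = [rng.uniform(0.2, 1.0, size=3) for _ in range(10_000)]
      thr = score_threshold(history)
      suspect = [0.9, 0.002, 0.8]               # one very unlikely (new) edge
      print(path_score(suspect) > thr)          # flag if above threshold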

  2. Advances in modeling trait-based plant community assembly.

    PubMed

    Laughlin, Daniel C; Laughlin, David E

    2013-10-01

    In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. A Statistical Study of Serum Cholesterol Level by Gender and Race.

    PubMed

    Tharu, Bhikhari Prasad; Tsokos, Chris P

    2017-07-25

    Cholesterol level (CL) is of growing concern as a health issue since it is considered one of the causes of heart disease. A study of cholesterol level can provide insight into its nature and characteristics. This was a cross-sectional study. The National Health and Nutrition Examination Survey (NHANES) II was conducted on a probability sample of approximately 28,000 persons in the USA, with cholesterol levels obtained from laboratory results. Samples were selected so as to include certain population groups thought to be at high risk of malnutrition. The study included 11,864 persons with CL measurements, 9,602 males and 2,262 females, of the races white, black, and other. Non-parametric statistical tests and goodness-of-fit tests were used to identify probability distributions. The study concludes that cholesterol level exhibits significant racial and gender differences in terms of probability distributions, and that white people are at relatively higher risk than black people of having borderline-risk and high-risk cholesterol. The study clearly indicates that black males normally have higher cholesterol. Females have lower variation in cholesterol than males. There exist gender and racial discrepancies in cholesterol, which has been identified as following lognormal and gamma probability distributions. White individuals seem to be at a higher risk of having high-risk cholesterol levels than blacks. Females tend to have higher variation in cholesterol level than males.
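
    The distribution-identification step can be sketched as fitting candidate families and comparing goodness of fit; the data below are synthetic, since the NHANES measurements are not reproduced here.

      import numpy as np
      from scipy import stats

      # Fit candidate models to a cholesterol-level sample and compare fit
      # with a Kolmogorov-Smirnov statistic (synthetic data, for illustration).
      rng = np.random.default_rng(6)
      cl = rng.lognormal(mean=5.3, sigma=0.2, size=2_000)   # synthetic CL (mg/dL)

      for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
          params = dist.fit(cl)
          ks = stats.kstest(cl, dist.cdf, args=params)
          print(f"{name}: KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")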

  4. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  5. Derivation of low flow frequency distributions under human activities and its implications

    NASA Astrophysics Data System (ADS)

    Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua

    2017-06-01

    Low flow, the minimum streamflow in dry seasons, is crucial to water supply, agricultural irrigation, and navigation. Human activities, such as groundwater pumping, influence low flow severely. In order to derive the low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type III distribution; (2) the probability distribution of climatic dry spell periods is derived based on a base flow recession model; (3) the base flow recession model is updated under human activities; and (4) the low flow distribution under human activities is obtained based on the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used to describe the base flow recession, respectively. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve for the linear reservoir model, while they alter the frequency distribution function for the nonlinear one. This indicates that merely adjusting the parameters of the low flow frequency distribution is not always adequate for tackling the changing environment.

  6. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well the radar-derived single- and joint-terminal cumulative fade distributions at various elevation angles.

  7. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from the moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution, i.e., Liu distribution using the obtained peakedness coefficient estimate; the Gaussian distribution; and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution have larger differences as the total or the slick filtered, not the radar filtered, probability density is included in the distribution. The best-performing distribution changes with incidence angle and changes with wind speed.

  8. A method to estimate stellar ages from kinematical data

    NASA Astrophysics Data System (ADS)

    Almeida-Fernandes, F.; Rocha-Pinto, H. J.

    2018-05-01

    We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS) that contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the stellar age PDF for the age of 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, which is a M8V star, obtaining the age of t(UVW) = 12.50(+0.29 - 6.23) Gyr.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conn, A. R.; Parker, Q. A.; Zucker, D. B.

    In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.

  10. Facility optimization to improve activation rate distributions during IVNAA.

    PubMed

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-05-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability, with a CV value of about 10%, and to change the collimator's role so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because of the increased secondary photon flux received by the detectors, it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of activation rate in the body.

  11. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
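
    A simplified conjugate (linear-Gaussian) version of the inference: with a linear source-receptor relation from the dispersion model, Gaussian measurement noise, and a Gaussian prior, the posterior over emission rates is available in closed form. The geometry and noise levels below are illustrative, and the paper's full posterior need not be Gaussian.

      import numpy as np

      def posterior_emission_rates(C, y, sigma_y, mu0, sigma0):
          """Gaussian posterior for source emission rates q, assuming the
          linear forward model y = C q + noise (C from a dispersion model),
          Gaussian noise with std sigma_y, and prior N(mu0, sigma0^2 I).
          """
          prec = C.T @ C / sigma_y**2 + np.eye(C.shape[1]) / sigma0**2
          cov = np.linalg.inv(prec)
          mean = cov @ (C.T @ y / sigma_y**2 + mu0 / sigma0**2)
          return mean, cov

      rng = np.random.default_rng(7)
      C = rng.uniform(0.0, 1.0, size=(8, 4))    # 8 sensors, 4 area sources
      q_true = np.array([2.0, 0.5, 1.5, 1.0])
      y = C @ q_true + rng.normal(0, 0.05, 8)
      mean, cov = posterior_emission_rates(C, y, 0.05, np.ones(4), 5.0)
      print(np.round(mean, 2))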

  12. A probabilistic safety analysis of incidents in nuclear research reactors.

    PubMed

    Lopes, Valdir Maciel; Agostinho Angelo Sordi, Gian Maria; Moralles, Mauricio; Filho, Tufic Madi

    2012-06-01

    This work aims to evaluate the potential risks of incidents in nuclear research reactors. For its development, two databases of the International Atomic Energy Agency (IAEA) were used: the Research Reactor Data Base (RRDB) and the Incident Report System for Research Reactors (IRSRR). For this study, probabilistic safety analysis (PSA) was used. To obtain the probability calculations for the PSA, the theory and equations in IAEA TECDOC-636 were used. A specific program to analyse the probabilities was developed within the main program, Scilab 5.1.1, for two distributions, Fisher and chi-square, both with a confidence level of 90%. Using Sordi's equations, the maximum admissible doses to compare with the risk limits established by the International Commission on Radiological Protection (ICRP) were obtained. All results achieved with this probability analysis led to the conclusion that the incidents which occurred had radiation doses within the stochastic-effects reference interval established by ICRP-64.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Shih-Jung

    Dynamic strength of the High Flux Isotope Reactor (HFIR) vessel to resist hypothetical accidents is analyzed by using the method of fracture mechanics. Vessel critical stresses are estimated by applying dynamic pressure pulses of a range of magnitudes and pulse durations. The pulse-versus-time functions are assumed to be step functions. The probability of vessel fracture is then calculated by assuming a distribution of possible surface cracks of different crack depths. The probability distribution function for the crack depths is based on the form that is recommended by the Marshall report. The toughness of the vessel steel used in the analysis is based on the projected and embrittled value after 10 effective full power years from 1986. From the study made by Cheverton, Merkle and Nanstad, the weakest point on the vessel for fracture evaluation is known to be located within the region surrounding the tangential beam tube HB3. The increase in the probability of fracture is obtained as an extension of the result from that report for the regular operating condition to include conditions of higher dynamic pressures due to accident loadings. The increase in the probability of vessel fracture is plotted for a range of hoop stresses to indicate the vessel strength against hypothetical accident conditions.

  14. Models of multidimensional discrete distribution of probabilities of random variables in information systems

    NASA Astrophysics Data System (ADS)

    Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.

    2018-03-01

    Multidimensional discrete probability distributions of independent random variables were obtained; their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.

  15. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

    A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neumann-Pearson detection.

  16. Multifractal features in stock and foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Yoon, Seong-Min

    2004-03-01

    We investigate the tick dynamical behavior of three assets (the yen-dollar exchange rate, the won-dollar exchange rate, and the KOSPI) using rescaled range analysis in stock and foreign exchange markets. The multifractal Hurst exponents with long-run memory effects can be obtained from the assets, and we discuss whether a crossover exists for the Hurst exponents at characteristic time scales. In particular, we find that the probability distribution of prices approaches a Lorentz distribution, which differs from the usual fat-tailed properties.
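
    A compact sketch of the classical rescaled-range (R/S) analysis used to estimate the Hurst exponent; for i.i.d. noise the estimate should be near 0.5, so the example serves as a sanity check rather than a reproduction of the market data.

      import numpy as np

      def hurst_rs(x, window_sizes):
          """Classical rescaled-range (R/S) estimate of the Hurst exponent:
          the slope of log(R/S) against log(window size)."""
          x = np.asarray(x, float)
          rs = []
          for w in window_sizes:
              chunks = x[: (len(x) // w) * w].reshape(-1, w)
              dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
              R = dev.max(axis=1) - dev.min(axis=1)       # range of cumulative deviations
              S = chunks.std(axis=1, ddof=1)              # per-window standard deviation
              rs.append(np.mean(R / S))
          slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
          return slope

      rng = np.random.default_rng(8)
      returns = rng.normal(size=20_000)                   # i.i.d. noise: H ~ 0.5
      print(round(hurst_rs(returns, [16, 32, 64, 128, 256, 512]), 2))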

  17. Power and type I error results for a bias-correction approach recently shown to provide accurate odds ratios of genetic variants for the secondary phenotypes associated with primary diseases.

    PubMed

    Wang, Jian; Shete, Sanjay

    2011-11-01

    We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.

  18. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  19. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of the dynamic response of systems with random parameters are reviewed. The dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures that require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities appear as coefficients, determining the exact distributions will be difficult at best; thus, certain approximations have to be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems approximated by linear systems.

  20. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  1. New Possibilities of Positron-Emission Tomography

    NASA Astrophysics Data System (ADS)

    Volobuev, A. N.

    2018-01-01

    The reasons for the emergence of the angular distribution of photons generated as a result of the annihilation of an electron and a positron in a positron-emission tomograph are investigated. It is shown that the angular distribution of the radiation intensity (i.e., the probability of photon emission at different angles) is a consequence of the Doppler effect in the center-of-mass reference frame of the electron and the positron. In the reference frame attached to the electron, the angular distribution of the number of emitted photons does not exist but is replaced by a Doppler shift of the photon frequency. The results obtained in this study make it possible to extend the potential of positron-emission tomography in the diagnosis of diseases and to obtain additional mechanical characteristics of human tissues, such as density and viscosity.

  2. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h and 0.5-0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.

  3. MaxEnt alternatives to pearson family distributions

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie J.

    2012-05-01

    In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
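
    A grid-based alternative to the semi-symbolic Mathematica route: the MaxEnt density under the first four moment constraints has the exponential-family form p(x) proportional to exp(sum_k lambda_k x^k), and the multipliers can be found by minimizing the convex dual log Z(lambda) - lambda . m, whose gradient is the moment mismatch. The target moments below are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(-6.0, 6.0, 2001)
      dx = x[1] - x[0]
      m_target = np.array([0.0, 1.0, 0.5, 4.0])        # target E[x^k], k = 1..4
      powers = np.vstack([x**k for k in range(1, 5)])  # shape (4, grid)

      def dual(lam):
          # convex dual of the MaxEnt problem: log Z(lam) - lam . m_target
          logp = lam @ powers
          c = logp.max()                               # stabilize the exponential
          logZ = np.log(np.exp(logp - c).sum() * dx) + c
          return logZ - lam @ m_target

      res = minimize(dual, x0=np.array([0.0, -0.5, 0.0, -0.01]),
                     method="Nelder-Mead",
                     options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
      p = np.exp(res.x @ powers)
      p /= p.sum() * dx                                # normalize on the grid
      print("fitted moments:",
            np.round([(p * x**k).sum() * dx for k in range(1, 5)], 2))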

  4. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
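
    The 'plain fact' in the closing sentence suggests a direct Monte Carlo test statistic, sketched below: the probability, under the specified density, of drawing a point whose density is at most that of the observed draw. The distribution and observed value are illustrative.

      import numpy as np
      from scipy import stats

      def low_density_pvalue(x_obs, dist, n_mc=200_000, rng=None):
          """Under H0 the draw comes from dist, so report
          P(dist.pdf(X) <= dist.pdf(x_obs)) for X ~ dist, estimated
          by Monte Carlo."""
          rng = rng or np.random.default_rng(10)
          sims = dist.rvs(size=n_mc, random_state=rng)
          return np.mean(dist.pdf(sims) <= dist.pdf(x_obs))

      # For N(0,1) this reproduces a two-sided tail probability.
      print(low_density_pvalue(3.5, stats.norm()))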

  5. Mean, covariance, and effective dimension of stochastic distributed delay dynamics

    NASA Astrophysics Data System (ADS)

    René, Alexandre; Longtin, André

    2017-11-01

    Dynamical models are often required to incorporate both delays and noise. However, the inherently infinite-dimensional nature of delay equations makes formal solutions to stochastic delay differential equations (SDDEs) challenging. Here, we present an approach, similar in spirit to the analysis of functional differential equations, but based on finite-dimensional matrix operators. This results in a method for obtaining both transient and stationary solutions that is directly amenable to computation, and applicable to first order differential systems with either discrete or distributed delays. With fewer assumptions on the system's parameters than other current solution methods and no need to be near a bifurcation, we decompose the solution to a linear SDDE with arbitrary distributed delays into natural modes, in effect the eigenfunctions of the differential operator, and show that relatively few modes can suffice to approximate the probability density of solutions. Thus, we are led to conclude that noise makes these SDDEs effectively low dimensional, which opens the possibility of practical definitions of probability densities over their solution space.
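
    As a point of reference for the low-dimensional approximations described above, a linear SDDE with a distributed delay can be simulated directly by Euler-Maruyama with a history buffer. The parameters and the exponential kernel below are assumptions, and this brute-force route is not the authors' matrix-operator method:

    ```python
    # Sketch: Euler-Maruyama simulation of a linear SDDE with distributed delay,
    #   dx(t) = (a*x(t) + b * int_0^tau k(s) x(t-s) ds) dt + sigma dW(t).
    # Brute-force reference dynamics, not the paper's matrix-operator method.
    import numpy as np

    a, b, sigma, tau = -1.0, -0.5, 0.3, 1.0          # parameters (assumptions)
    dt, T = 1e-3, 20.0
    n_lag = int(tau / dt)
    s = np.arange(1, n_lag + 1) * dt
    kernel = np.exp(-s)
    kernel /= np.trapz(kernel, s)                    # normalized exponential kernel

    rng = np.random.default_rng(1)
    x = np.empty(int(T / dt) + n_lag)
    x[:n_lag] = 1.0                                  # constant history on [-tau, 0]
    for i in range(n_lag, len(x) - 1):
        past = x[i - n_lag:i][::-1]                  # x(t - s) for s = dt .. tau
        drift = a * x[i] + b * np.trapz(kernel * past, s)
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    print("late-time mean/std:", x[-5000:].mean().round(3), x[-5000:].std().round(3))
    ```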

  6. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  7. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
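
    Under stated assumptions (illustrative parameter values; one-sided z tests with a known standard error), the model's aggregate replication probabilities can be approximated by direct simulation of its three normally distributed ingredients:

    ```python
    # Sketch of the model's ingredients: true effect delta ~ N(mu, sigma_d),
    # replication jitter ~ N(0, sigma_j), measurement error ~ N(0, se).
    # Estimate the aggregate probability that a significant initial result is
    # followed by a significant (or same-direction) replication.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma_d, sigma_j, se = 0.3, 0.2, 0.1, 0.15   # research-context parameters (assumed)
    z_crit = 1.645                                   # one-sided 5% criterion
    n = 1_000_000

    delta = rng.normal(mu, sigma_d, n)               # true effect per research question
    d1 = delta + rng.normal(0, se, n)                # initial observed effect
    d2 = delta + rng.normal(0, sigma_j, n) + rng.normal(0, se, n)  # replication

    sig1 = d1 / se > z_crit                          # initial study significant
    p_sig = (d2[sig1] / se > z_crit).mean()          # significant, same direction
    p_dir = (d2[sig1] > 0).mean()                    # any effect in same direction
    print(f"P(significant replication)={p_sig:.3f}  P(same direction)={p_dir:.3f}")
    ```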

  8. Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles.

    PubMed

    Bhatnagar, Akshay; Gustavsson, K; Mitra, Dhrubaditya

    2018-02-01

    We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component V_{R} for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D_{2}. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |V_{R}|≪R, where the distribution depends solely on R, and (2) |V_{R}|≫R, where the distribution is a function of |V_{R}| alone. The probability distributions in these two regimes are matched along a straight line: |V_{R}|=z^{*}R. Our simulations confirm that this is indeed correct. We further obtain D_{2} and z^{*} as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St≈0.7 and the latter has only a weak dependence on St.

  10. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.

  11. Quantum Theory of Wormholes

    NASA Astrophysics Data System (ADS)

    González-Díaz, Pedro F.

    We re-explore the effects of multiply-connected wormholes on ordinary matter at low energies. We find that the path integral that describes these effects is given in terms of a Planckian probability distribution for the Coleman α-parameters, rather than a classical Gaussian distribution law. This implies that the path integral over all low-energy fields with the wormhole effective interactions can no longer vary continuously, and that the quantities α² are interpretable as the momenta of a quantum field. Using the new result that, rather than being given in terms of the Coleman-Hawking probability, the Euclidean action must equal negative entropy, the model predicts a very small but still nonzero cosmological constant and quite reasonable values for the pion and neutrino masses. The divergence problems of Euclidean quantum gravity are also discussed in the light of the above results.

  12. A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials

    NASA Technical Reports Server (NTRS)

    Schur, Willi W.; Bilen, Canan; Sterling, Jerry

    2003-01-01

    Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of strength requirement and available strength. This paper develops a procedure for determining such a threshold, with a desired degree of confidence, for structural materials with little or no industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one-dimensional) structural elements. One important aspect of the formulation is that it extrapolates the expected probability distributions for long-length specimens from a hypothesized probability distribution that has been obtained from a shorter-length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.
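
    The paper's exact formulation is not reproduced here, but the extrapolation step can be illustrated with the standard weakest-link property of the Weibull distribution: the minimum of m independent Weibull(k, λ) variables is Weibull(k, λ·m^(-1/k)). A sketch with invented material values:

    ```python
    # Sketch: weakest-link extrapolation of a Weibull strength distribution fitted
    # at gauge length L0 to a longer length L = m*L0.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    k_true, lam_true = 8.0, 100.0                        # hypothetical tendon material
    sample = stats.weibull_min.rvs(k_true, scale=lam_true, size=50, random_state=rng)

    k, loc, lam = stats.weibull_min.fit(sample, floc=0)  # fit short-specimen data
    m = 20                                               # length ratio L / L0
    lam_long = lam * m ** (-1.0 / k)                     # weakest-link scale at length L

    # Guaranteed minimum strength: e.g. the 1st percentile at the long length
    threshold = stats.weibull_min.ppf(0.01, k, scale=lam_long)
    print(f"fitted k={k:.2f}, scale={lam:.1f}; 1% threshold at L={m}*L0: {threshold:.1f}")
    ```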

  13. Exploring activity-driven network with biased walks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Wu, Ding Juan; Lv, Fang; Su, Meng Long

    We investigate the concurrent dynamics of biased random walks and the activity-driven network, where the preferential transition probability is expressed in terms of the edge-weighting parameter. We obtain analytical expressions for the stationary distribution and the coverage function in directed and undirected networks, all of which depend on the weight parameter. Because network weights play a significant role in the diffusion process, appropriately adjusting this parameter yields a more effective search strategy than the unbiased random walk, in both directed and undirected networks.
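
    A minimal sketch of such a biased walk (an illustration, not the authors' derivation): on an undirected weighted network with transition probability proportional to w_ij^α, the stationary occupation of node i is proportional to Σ_j w_ij^α, which the simulation below checks empirically.

    ```python
    # Sketch: biased random walk with transition probability ~ w_ij**alpha,
    # compared against the stationary law of a reversible weighted walk.
    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha = 40, 2.0
    W = rng.random((n, n))
    W = (W + W.T) / 2.0                     # symmetric weights (undirected network)
    np.fill_diagonal(W, 0.0)

    P = W**alpha
    P /= P.sum(axis=1, keepdims=True)       # biased transition probabilities

    pos, steps = 0, 100_000
    counts = np.zeros(n)
    for _ in range(steps):
        pos = rng.choice(n, p=P[pos])
        counts[pos] += 1

    empirical = counts / steps
    theory = (W**alpha).sum(axis=1)
    theory /= theory.sum()                  # pi_i ~ sum_j w_ij**alpha
    print("max |empirical - theory| =", np.abs(empirical - theory).max().round(4))
    ```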

  14. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
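
    In the linear Gaussian case mentioned above, one plausible reading of the iteration can be sketched as follows (model sizes, the prior, and the Gaussian fit of the discrepancy are assumptions; this is not the paper's code):

    ```python
    # Sketch: iterative model-error update for a linear Gaussian inverse problem.
    # Accurate model A, cheap model A0, data y = A u + noise; the discrepancy
    # (A - A0) u is modelled as Gaussian and re-estimated from the posterior.
    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 40, 20
    A = rng.normal(size=(m, d))                  # "expensive" forward model
    A0 = A + 0.2 * rng.normal(size=(m, d))       # reduced/approximate model
    C_pr = np.eye(d)                             # prior: u ~ N(0, I)
    gamma2 = 0.05**2                             # observation noise variance
    u_true = rng.normal(size=d)
    y = A @ u_true + np.sqrt(gamma2) * rng.normal(size=m)

    mu_e, C_e = np.zeros(m), np.zeros((m, m))    # initial model-error estimate: zero
    for it in range(10):
        # Posterior for u under y = A0 u + e + noise, with e ~ N(mu_e, C_e)
        C_obs = gamma2 * np.eye(m) + C_e
        K = C_pr @ A0.T @ np.linalg.inv(A0 @ C_pr @ A0.T + C_obs)
        mu_u = K @ (y - mu_e)
        C_u = C_pr - K @ A0 @ C_pr
        # Re-estimate the model-error distribution by pushing the posterior
        # through the discrepancy operator (A - A0)
        D = A - A0
        mu_e, C_e = D @ mu_u, D @ C_u @ D.T
        print(f"iter {it}: ||mu_u - u_true|| = {np.linalg.norm(mu_u - u_true):.3f}")
    ```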

  15. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
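
    The stress-strength computation itself is compact; the sketch below (with invented stress and strength models) illustrates both the integral R = ∫ f_stress(x)·P(strength > x) dx and the paper's point that nearly indistinguishable strength models can give very different high-reliability estimates:

    ```python
    # Sketch: stress-strength reliability and its sensitivity to a barely
    # detectable change in the assumed strength distribution.
    import numpy as np
    from scipy import stats, integrate

    stress = stats.norm(400.0, 30.0)                 # assumed stress model (illustrative)
    strength_a = stats.norm(600.0, 40.0)             # assumed strength model
    strength_b = stats.t(df=20, loc=600.0, scale=38.0)  # nearly indistinguishable alternative

    def reliability(strength):
        # R = integral of f_stress(x) * P(strength > x) dx
        integrand = lambda x: stress.pdf(x) * strength.sf(x)
        val, _ = integrate.quad(integrand, 200.0, 800.0)
        return val

    for name, s in [("normal", strength_a), ("t(20)", strength_b)]:
        print(name, "unreliability ~", f"{1.0 - reliability(s):.2e}")
    ```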

  16. XID+: Next generation XID development

    NASA Astrophysics Data System (ADS)

    Hurley, Peter

    2017-04-01

    XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.

  17. Distribution of injected power fluctuations in electroconvection.

    PubMed

    Tóth-Katona, Tibor; Gleeson, J T

    2003-12-31

    We report on the distribution spectra of the fluctuations in the amount of power injected into a liquid crystal undergoing electroconvective flow. The probability distribution functions (PDFs) of the fluctuations, as well as the magnitude of the fluctuations, have been determined over a wide range of imposed stress for both unconfined and confined flow geometries. These spectra are compared to those found in other systems held far from equilibrium, and we find that under certain conditions we obtain the universal PDF form reported in [Phys. Rev. Lett. 84, 3744 (2000)]. Moreover, the PDF approaches this universal form via an interesting mechanism whereby the distribution's negative tail evolves towards this form in a different manner than the positive tail.

  18. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    The objective of this study was to derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subjects (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45-, and 65-year-old age groups at 5, 25, and 50% risk levels for lower leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.

  19. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be fairly robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center.

  20. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    PubMed Central

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Because wind turbines are used over prolonged periods, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speeds. PMID:26167524
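
    A minimal sketch of the Monte Carlo step described above, with triangular input distributions and an ogive (cumulative) summary; all numbers and the toy power curve are placeholders, not the turbine's measured characteristics:

    ```python
    # Sketch: Monte Carlo estimate of daily energy output with triangular inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    wind = rng.triangular(2.0, 5.0, 12.0, size=n)      # m/s: min, mode, max (assumed)
    hours = rng.triangular(6.0, 10.0, 16.0, size=n)    # productive hours/day (assumed)

    def power_kw(v):
        # crude placeholder power curve for a small helical VAWT
        return np.clip(0.02 * v**3, 0.0, 3.0)

    energy = power_kw(wind) * hours                    # kWh per day
    freqs, edges = np.histogram(energy, bins=40)
    ogive = np.cumsum(freqs) / n                       # cumulative distribution (ogive)
    print("P(daily energy <= 10 kWh) ~", ogive[np.searchsorted(edges, 10.0) - 1])
    ```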

  2. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions

    PubMed Central

    Marinelli, Fabrizio; Faraldo-Gómez, José D.

    2015-01-01

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org) and can therefore be readily used with multiple MD engines. PMID:26083917

  3. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear in this case. The class of beta density functions is rather rich, which explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
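
    For the beta/binomial case described above, conjugacy makes the MMSE (posterior-mean) estimate explicit and visibly linear in the observed counts; a short sketch with illustrative parameters:

    ```python
    # Sketch: MMSE (posterior-mean) estimate of a rate p with a Beta prior and
    # binomially distributed jump counts; the estimate is linear in the data.
    import numpy as np

    a, b = 2.0, 5.0          # Beta prior parameters (illustrative)
    m = 10                   # binomial trials per time step
    rng = np.random.default_rng(0)
    p_true = rng.beta(a, b)

    counts = rng.binomial(m, p_true, size=50)      # observed jump amplitudes
    post_a = a + counts.cumsum()                   # posterior Beta parameters
    post_b = b + (m - counts).cumsum()
    mmse = post_a / (post_a + post_b)              # E[p | data], affine in the counts
    print(f"p_true={p_true:.3f}, final MMSE estimate={mmse[-1]:.3f}")
    ```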

  5. On the optimal identification of tag sets in time-constrained RFID configurations.

    PubMed

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.

  6. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
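
    Assuming scipy's betabinom distribution and synthetic survey counts (the CHMS data are not reproduced here), the fitting and adherence computation can be sketched as:

    ```python
    # Sketch: Betabinomial model for adherence (scipy >= 1.4 for stats.betabinom).
    # alpha and beta are fitted by maximum likelihood from active-day counts.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    n_days = 7
    active = rng.binomial(n_days, rng.beta(2.0, 3.0, size=500))   # synthetic data

    def nll(params):
        a, b = np.exp(params)                      # enforce positivity
        return -stats.betabinom.logpmf(active, n_days, a, b).sum()

    res = optimize.minimize(nll, np.log([1.0, 1.0]), method="Nelder-Mead")
    a_hat, b_hat = np.exp(res.x)
    p_adhere = stats.betabinom.pmf(n_days, n_days, a_hat, b_hat)  # active 7 of 7 days
    print(f"alpha={a_hat:.2f} beta={b_hat:.2f}  P(7/7 active)={p_adhere:.3f}")
    ```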

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  8. The Impact of the Geometrical Structure of the DNA on Parameters of the Track-Event Theory for Radiation Induced Cell Kill.

    PubMed

    Schneider, Uwe; Vasi, Fabiano; Besserer, Jürgen

    2016-01-01

    When fractionation schemes for hypofractionation and stereotactic body radiotherapy are considered, a reliable cell survival model at high dose is needed for calculating doses of similar biological effectiveness. An alternative to the LQ-model is the track-event theory, which is based on the probabilities for one-track and two-track events. A one-track event (OTE) is always represented by at least two simultaneous double strand breaks. A two-track event (TTE) results in one double strand break. Therefore at least two two-track events on the same or different chromosomes are necessary to produce an event which leads to cell sterilization. It is obvious that the probabilities of OTEs and TTEs must somehow depend on the geometrical structure of the chromatin. In terms of the track-event theory, the ratio ε of the probabilities of OTEs and TTEs includes the geometrical dependence and is obtained in this work by simple Monte Carlo simulations. For this work it was assumed that the anchors of loop-forming chromatin are most sensitive to radiation-induced cell death. Therefore two adjacent tetranucleosomes representing the loop anchors were digitized. The probability ratio ε of OTEs and TTEs was factorized into a radiation quality dependent part and a geometrical part: ε = εion ∙ εgeo. εgeo was obtained for two situations by applying Monte Carlo simulation for DNA on the tetranucleosomes themselves and for linker DNA. Low energy electrons were represented by randomly distributed ionizations and high energy electrons by ionizations which were simulated on rays. εion was determined for electrons by using results from nanodosimetric measurements. The calculated ε was compared to the ε obtained from fits of the track-event model to 42 sets of experimental human cell survival data. When the two tetranucleosomes are in direct contact and the hits are randomly distributed, εgeo and ε are 0.12 and 0.85, respectively. When the hits are simulated on rays, εgeo and ε are 0.10 and 0.71. For the linker DNA, εgeo and ε for randomly distributed hits are 0.010 and 0.073, and for hits on rays 0.0058 and 0.041, respectively. The calculated ε fits the experimentally obtained ε = 0.64±0.32 best for hits on the tetranucleosomes when they are close to each other, for both high and low energy electrons. The parameter εgeo of the track-event model was obtained by purely geometrical considerations of the chromatin structure and is 0.095 ± 0.022. It can be used as a fixed parameter in the track-event theory.

  9. Modelling the spatial distribution of Fasciola hepatica in dairy cattle in Europe.

    PubMed

    Ducheyne, Els; Charlier, Johannes; Vercruysse, Jozef; Rinaldi, Laura; Biggeri, Annibale; Demeler, Janina; Brandt, Christina; De Waal, Theo; Selemetas, Nikolaos; Höglund, Johan; Kaba, Jaroslaw; Kowalczyk, Slawomir J; Hendrickx, Guy

    2015-03-26

    A harmonized sampling approach in combination with spatial modelling is required to update current knowledge of fasciolosis in dairy cattle in Europe. Within the scope of the EU project GLOWORM, samples from 3,359 randomly selected farms in 849 municipalities in Belgium, Germany, Ireland, Poland and Sweden were collected and their infection status assessed using an indirect bulk tank milk (BTM) enzyme-linked immunosorbent assay (ELISA). Dairy farms were considered exposed when the optical density ratio (ODR) exceeded the 0.3 cut-off. Two ensemble-modelling techniques, Random Forests (RF) and Boosted Regression Trees (BRT), were used to obtain the spatial distribution of the probability of exposure to Fasciola hepatica using remotely sensed environmental variables (1-km spatial resolution) and interpolated values from meteorological stations as predictors. The median ODRs amounted to 0.31, 0.12, 0.54, 0.25 and 0.44 for Belgium, Germany, Ireland, Poland and southern Sweden, respectively. Using the 0.3 threshold, 571 municipalities were categorized as positive and 429 as negative. RF was capable of predicting the spatial distribution of exposure with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.83 (0.96 for BRT). Both models identified rainfall and temperature as the most important factors for probability of exposure. Areas of high and low exposure were identified by both models, with BRT better at discriminating between low-probability and high-probability exposure; this model may therefore be more useful in practice. Given a harmonized sampling strategy, it should be possible to generate robust spatial models for fasciolosis in dairy cattle in Europe to be used as input for temporal models and for the detection of deviations in baseline probability. Further research is required for model output in areas outside the eco-climatic range investigated.
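
    A sketch of the Random Forest step on synthetic data (the predictor names and the data-generating rule are invented; this is not the GLOWORM dataset):

    ```python
    # Sketch: predicting probability of exposure from environmental covariates
    # with Random Forests, evaluated by AUC on a held-out split.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.normal(800, 150, n),    # annual rainfall, mm (assumed predictor)
        rng.normal(9, 2, n),        # mean temperature, C (assumed predictor)
        rng.normal(100, 50, n),     # elevation, m (assumed predictor)
    ])
    logit = 0.004 * (X[:, 0] - 800) + 0.3 * (X[:, 1] - 9) - 0.002 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # exposed (ODR > 0.3) or not

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    proba = rf.predict_proba(X_te)[:, 1]
    print("AUC:", round(roc_auc_score(y_te, proba), 3))
    print("feature importances:", rf.feature_importances_.round(2))
    ```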

  10. Structure of velocity distributions in shock waves in granular gases with extension to molecular gases.

    PubMed

    Vilquin, A; Boudet, J F; Kellay, H

    2016-08-01

    Velocity distributions in normal shock waves obtained in dilute granular flows are studied. These distributions cannot be described by a simple functional shape and are believed to be bimodal. Our results show that these distributions are not strictly bimodal but a trimodal distribution is shown to be sufficient. The usual Mott-Smith bimodal description of these distributions, developed for molecular gases, and based on the coexistence of two subpopulations (a supersonic and a subsonic population) in the shock front, can be modified by adding a third subpopulation. Our experiments show that this additional population results from collisions between the supersonic and subsonic subpopulations. We propose a simple approach incorporating the role of this third intermediate population to model the measured probability distributions and apply it to granular shocks as well as shocks in molecular gases.

  11. Derivation of Failure Rates and Probability of Failures for the International Space Station Probabilistic Risk Assessment Study

    NASA Technical Reports Server (NTRS)

    Vitali, Roberto; Lutomski, Michael G.

    2004-01-01

    The National Aeronautics and Space Administration's (NASA) International Space Station (ISS) Program uses Probabilistic Risk Assessment (PRA) as part of its Continuous Risk Management Process. It is used as a decision and management support tool not only to quantify risk for specific conditions, but more importantly to compare different operational and management options, determine the lowest-risk option, and provide rationale for management decisions. This paper presents the derivation of the probability distributions used to quantify the failure rates and the probabilities of failure of the basic events employed in the PRA model of the ISS. The paper shows how a Bayesian approach was used with different sources of data, including actual ISS on-orbit failures, to enhance confidence in the results of the PRA. As time progresses and more meaningful data are gathered from on-orbit failures, an increasingly accurate failure rate probability distribution for the basic events of the ISS PRA model can be obtained. The ISS PRA has been developed by mapping the ISS critical systems, such as propulsion, thermal control, or power generation, into event sequence diagrams and fault trees. The lowest level of indenture of the fault trees was the orbital replacement unit (ORU). The ORU level was chosen consistently with the level of statistically meaningful data that could be obtained from the aerospace industry and from experts in the field. For example, data were gathered for the solenoid valves present in the propulsion system of the ISS. However, valves themselves are composed of parts, and the individual failure of these parts was not accounted for in the PRA model. In other words, the failure of a spring within a valve was considered a failure of the valve itself.
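
    One standard way to realize the Bayesian updating described above is gamma-Poisson conjugacy for a constant failure rate; the prior and the on-orbit evidence below are illustrative assumptions, not ISS figures:

    ```python
    # Sketch: Bayesian update of an ORU failure rate via gamma-Poisson conjugacy,
    # showing how on-orbit failures sharpen an industry-based prior.
    from scipy import stats

    # Prior from industry data: mean rate 1e-5 /h with broad uncertainty (assumed)
    a0, b0 = 2.0, 2.0e5                 # Gamma(shape, rate) prior, rate in hours

    # On-orbit evidence: n failures over T operating hours (illustrative numbers)
    n_fail, T = 3, 4.0e5
    a1, b1 = a0 + n_fail, b0 + T        # posterior is Gamma(a0 + n, b0 + T)

    post = stats.gamma(a1, scale=1.0 / b1)
    print(f"posterior mean rate = {post.mean():.2e} /h")
    print(f"90% credible interval = ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e}) /h")
    ```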

  12. Role of fish distribution on estimates of standing crop in a cooling reservoir

    USGS Publications Warehouse

    Barwick, D. Hugh

    1984-01-01

    Estimates of fish standing crop from coves in Keowee Reservoir, South Carolina, were obtained in May and August for 3 consecutive years. Estimates were significantly higher in May than in August for most of the major species of fish collected, suggesting that considerable numbers of fish had migrated from the coves by August. This change in fish distribution may have resulted from the operation of a 2,580-megawatt nuclear power plant which altered reservoir stratification. Because fish distribution is sensitive to conditions of reservoir stratification, and because power plants often alter reservoir stratification, annual cove sampling in August may not be sufficient to produce comparable estimates of fish standing crop on which to assess the impact of power plant operations on fish populations. Comparable estimates of fish standing crop can probably be obtained from cooling reservoirs by collecting annual samples at similar water temperatures and concentrations of dissolved oxygen.

  13. Photodissociation of ethylbenzene and n-propylbenzene in a molecular beam

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Liang; Jiang, Jyh-Chiang; Lee, Yuan T.; Ni, Chi-Kung

    2002-10-01

    The photodissociation of jet-cooled ethylbenzene and n-propylbenzene at both 193 and 248 nm was studied using vacuum ultraviolet photoionization/multimass ion imaging techniques. The photofragment translational energy distributions from both molecules obtained at 193 nm show that the probability of partitioning energy into product translational energy decreases monotonically with increasing translational energy. They indicate that the dissociation occurs on the ground electronic state. However, the photofragment translational energy distributions from both molecules obtained at 248 nm contain a fast and a slow component. 75% of ethylbenzene and 80% of n-propylbenzene molecules dissociate from an electronically excited state following 248 nm photoexcitation, resulting in the fast component. The remaining 25% of ethylbenzene and 20% of n-propylbenzene dissociate through the ground electronic state, giving rise to the slow component. A comparison with an ab initio calculation suggests that dissociation from the first triplet state corresponds to the fast component of the translational energy distribution.

  14. Prediction of Mean and Design Fatigue Lives of Self Compacting Concrete Beams in Flexure

    NASA Astrophysics Data System (ADS)

    Goel, S.; Singh, S. P.; Singh, P.; Kaushik, S. K.

    2012-02-01

    In this paper, the results of an investigation conducted to study the flexural fatigue characteristics of self compacting concrete (SCC) beams are presented. An experimental programme was planned in which approximately 60 SCC beam specimens of size 100 × 100 × 500 mm were tested under flexural fatigue loading. Approximately 45 static flexural tests were also conducted to facilitate fatigue testing. The flexural fatigue and static flexural strength tests were conducted on a 100 kN servo-controlled actuator. The fatigue life data thus obtained have been used to establish the probability distributions of fatigue life of SCC using the two-parameter Weibull distribution. The parameters of the Weibull distribution have been obtained by different methods of analysis. Using the distribution parameters, the mean and design fatigue lives of SCC have been estimated and compared with those of normally vibrated concrete (NVC), the data for which have been taken from the literature. It has been observed that SCC exhibits higher mean and design fatigue lives compared to NVC.
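
    A sketch of the two-parameter Weibull fit and the resulting mean and design lives (synthetic fatigue lives, not the paper's measurements). For survival probability p, the design life follows from S(N) = exp(-(N/λ)^k) as N_p = λ(-ln p)^(1/k):

    ```python
    # Sketch: two-parameter Weibull fit to fatigue-life data, then mean life
    # and design life at a chosen survival probability.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    lives = stats.weibull_min.rvs(1.8, scale=2.0e5, size=60, random_state=rng)  # cycles

    k, loc, lam = stats.weibull_min.fit(lives, floc=0)   # two-parameter fit
    mean_life = stats.weibull_min.mean(k, scale=lam)
    p = 0.90                                             # survival probability
    design_life = lam * (-np.log(p)) ** (1.0 / k)        # N_p = lam * (-ln p)**(1/k)
    print(f"k={k:.2f}, lambda={lam:.3g}, mean={mean_life:.3g}, "
          f"design life (90% survival)={design_life:.3g} cycles")
    ```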

  15. The distribution of stars most likely to harbor intelligent life.

    PubMed

    Whitmire, Daniel P; Matese, John J

    2009-09-01

    Simple heuristic models and recent numerical simulations show that the probability of habitable planet formation increases with stellar mass. We combine those results with the distribution of main-sequence stellar masses to obtain the distribution of stars most likely to possess habitable planets as a function of stellar lifetime. We then impose the self-selection condition that intelligent observers can only find themselves around a star with a lifetime greater than the time required for that observer to have evolved, T(i). This allows us to obtain the stellar timescale number distribution for a given value of T(i). Our results show that for habitable planets with a civilization that evolved at time T(i) = 4.5 Gyr the median stellar lifetime is 13 Gyr, corresponding approximately to a stellar type of G5, with two-thirds of the stars having lifetimes between 7 and 30 Gyr, corresponding approximately to spectral types G0-K5. For other values of T(i) the median stellar lifetime changes by less than 50%.

  16. Distribution of mean Doppler shift, spectral width, and skewness of coherent 50-MHz auroral radar backscatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watermann, J.; McNamara, A.G.; Sofko, G.J.

    Some 7,700 radio aurora spectra obtained from a six-link 50-MHz CW radar network set up on the Canadian prairies were analyzed with respect to the distributions of mean Doppler shift, spectral width, and skewness. A comparison with recently published SABRE results obtained at 153 MHz shows substantial differences in the distributions, which are probably due to different experimental and geophysical conditions. The spectra are mostly broad with mean Doppler shifts close to zero (type II spectra). The typical groupings of type I and type III spectra are clearly identified. All types appear to be in general much more symmetric than those recorded with SABRE, and the skewness is only weakly dependent on the sign of the mean Doppler shift. Its distribution peaks near zero and shows a weak positive correlation with the type II Doppler shifts, while the mostly positive type I Doppler shifts are slightly negatively correlated with the skewness.

  17. Synthesis and characterization of magnetic poly(divinyl benzene)/Fe3O4, C/Fe3O4/Fe, and C/Fe onionlike fullerene micrometer-sized particles with a narrow size distribution.

    PubMed

    Snovski, Ron; Grinblat, Judith; Margel, Shlomo

    2011-09-06

    Magnetic poly(divinyl benzene)/Fe3O4 microspheres with a narrow size distribution were produced by entrapping the iron pentacarbonyl precursor within the pores of uniform porous poly(divinyl benzene) microspheres prepared in our laboratory, followed by the decomposition in a sealed cell of the entrapped Fe(CO)5 particles at 300 °C under an inert atmosphere. Magnetic onionlike fullerene microspheres with a narrow size distribution were produced by annealing the obtained PDVB/Fe3O4 particles at 500, 600, 800, and 1100 °C, respectively, under an inert atmosphere. The formation of graphitic carbon layers at temperatures as low as 500 °C is unique and is probably obtained because of the presence of the magnetic iron nanoparticles. The annealing temperature allowed control of the composition, size, size distribution, crystallinity, porosity, and magnetic properties of the produced magnetic microspheres.

  18. Dynamic phase transitions of the Blume-Emery-Griffiths model under an oscillating external magnetic field by the path probability method

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase, and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, and they exhibit a dynamic tricritical point, double critical end point, critical end point, quadruple point, and triple point, as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss these dynamic phase diagrams with the dynamic phase diagrams obtained within Glauber-type stochastic dynamics based on mean-field theory.

  19. A computational framework to empower probabilistic protein design

    PubMed Central

    Fromer, Menachem; Yanover, Chen

    2008-01-01

    Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717

  20. Nuclear Ensemble Approach with Importance Sampling.

    PubMed

    Kossoski, Fábris; Barbatti, Mario

    2018-06-12

    We show that the importance sampling technique can effectively augment the range of problems where the nuclear ensemble approach can be applied. A sampling probability distribution function initially determines the collection of initial conditions for which calculations are performed, as usual. Then, results for a distinct target distribution are computed by introducing compensating importance sampling weights for each sampled point. This mapping between the two probability distributions can be performed whenever they are both explicitly constructed. Perhaps most notably, this procedure allows for the computation of temperature dependent observables. As a test case, we investigated the UV absorption spectra of phenol, which has been shown to have a marked temperature dependence. Application of the proposed technique to a range that covers 500 K provides results that converge to those obtained with conventional sampling. We further show that an overall improved rate of convergence is obtained when sampling is performed at intermediate temperatures. The comparison between calculated and the available measured cross sections is very satisfactory, as the main features of the spectra are correctly reproduced. As a second test case, one of Tully's classical models was revisited, and we show that the computation of dynamical observables also profits from the importance sampling technique. In summary, the strategy developed here can be employed to assess the role of temperature for any property calculated within the nuclear ensemble method, with the same computational cost as doing so for a single temperature.
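
    The reweighting step itself is compact; a sketch with one-dimensional Gaussian stand-ins for the sampling and target ensembles (the actual nuclear-ensemble densities are multidimensional):

    ```python
    # Sketch: importance-sampling reweighting. Samples drawn from p_sample are
    # reused to average an observable under a distinct target distribution via
    # weights w = p_target(x) / p_sample(x).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p_sample = stats.norm(0.0, 1.2)        # sampling distribution (e.g., intermediate T)
    p_target = stats.norm(0.0, 0.8)        # target distribution (e.g., lower T)

    x = p_sample.rvs(size=20000, random_state=rng)
    w = p_target.pdf(x) / p_sample.pdf(x)  # importance weights
    observable = x**2                      # any property computed per sampled point

    estimate = np.sum(w * observable) / np.sum(w)   # self-normalized estimate
    print(f"reweighted <x^2> = {estimate:.3f} (exact 0.64)")
    neff = w.sum()**2 / (w**2).sum()       # effective sample size diagnostic
    print(f"effective samples ~ {neff:.0f} of {len(x)}")
    ```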

  1. A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from some TM images because true information is lost to blocking clouds and missing data stripes. Since water is continuously distributed in natural conditions, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of disturbing information from clouds and missing data stripes were simulated. Water information was then extracted using global histogram matching, local histogram matching, and the probability-based statistical method in the simulated images. Experiments show that a smaller Areal Error and higher Boundary Recall can be obtained using this method compared with the conventional methods.

  2. A moment-convergence method for stochastic analysis of biochemical reaction networks.

    PubMed

    Zhang, Jiajun; Nie, Qing; Zhou, Tianshou

    2016-05-21

    Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution approximate to zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.

  3. Application of Bayesian Reliability Concepts to Cruise Missile Electronic Components

    DTIC Science & Technology

    1989-09-01

    and contrast them with the more prevalent classical inference view. 3 II. literature Review Introduction This literature review will consider current ...events on the basis of whatever evidence is currently available. Then if additional evidence is subsequently obtained, the initial probabilities are...Chay contends there is no longer any need to approximate continuous prior distributions through discretization because current computer calculations

  4. A simulation of probabilistic wildfire risk components for the continental United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short

    2011-01-01

    This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...

  5. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We find that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
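    The least-squares side of the puzzle is easy to reproduce. A minimal numpy sketch, using the figures conventionally quoted for PPP (measurements of 1.5 and 1.0 with 10% independent errors and a 20% common error taken proportional to the measured values), recovers the 0.88 answer by generalized least squares:

    ```python
    import numpy as np

    # Two measurements of one quantity: 10% independent errors plus a 20%
    # fully correlated error proportional to the measured values.
    m = np.array([1.5, 1.0])
    V = np.diag((0.10 * m) ** 2) + 0.20**2 * np.outer(m, m)  # covariance

    Vinv = np.linalg.inv(V)
    ones = np.ones(2)
    mu = ones @ Vinv @ m / (ones @ Vinv @ ones)  # generalized least squares
    err = 1.0 / np.sqrt(ones @ Vinv @ ones)
    print(f"GLS estimate: {mu:.2f} +/- {err:.2f}")  # 0.88, below both points
    ```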

  6. Fusion-fission Study at JAEA for Heavy-element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change in the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation-dissipation model reproduced the mass distributions and their incident-energy dependence. The fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions 30Si+238U and 34S+238U using the fusion probability obtained for the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy-element synthesis.

  7. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  8. Disentangling rotational velocity distribution of stars

    NASA Astrophysics Data System (ADS)

    Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra

    2017-11-01

    Rotational speed is an important physical parameter of stars: knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly; what is observed is the projected velocity v sin(i), whose distribution is the convolution of the rotational speed distribution with that of the sine of the inclination angle. The problem can be described via a Fredholm integral of the first kind. A method (Curé et al. 2014) to deconvolve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability distribution function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass ratio distribution of extrasolar planets and brown dwarfs (in binary systems, Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, with the possibility of simultaneously "disentangling" both the rotational speed distribution and the orientation of the rotational axes.

  9. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing-approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
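    Assuming normally distributed errors, the reported 50% and 99.7% figures can be cross-checked, since the probable error of a normal variable is about 0.6745 times its standard deviation. A small scipy sketch:

    ```python
    from scipy.stats import norm

    # Probable error of a normal variable is ~0.6745 sigma; check that the
    # reported 50% and 99.7% landing-approach figures are consistent.
    probable = 36.0                       # feet
    sigma = probable / norm.ppf(0.75)     # norm.ppf(0.75) ~ 0.6745
    print(f"implied sigma ~ {sigma:.0f} ft; 3*sigma ~ {3 * sigma:.0f} ft")
    # ~53 ft and ~160 ft, consistent with the reported +/- 159 ft
    ```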

  10. Prediction of future asset prices

    NASA Astrophysics Data System (ADS)

    Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei

    2014-12-01

    This paper attempts to incorporate trading volume as an additional predictor for predicting asset prices. Denoting by r(t) the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price as dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, suggesting that it is desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When this probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) price change tends to be negative (or positive). Thus the probability has good potential to be used as a market indicator in technical analysis.

  11. Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference.

    PubMed

    Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing

    2016-01-01

    In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications.

  13. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.

  14. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for the optimal placement of sensors in a water distribution system (WDS). The model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. CVaR treats the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, whose extreme losses occur in the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses from contamination injection (through the CVaR of the affected population and of detection time) as well as the two other main criteria of optimal sensor placement, the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion for the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in southwestern Iran. PROMETHEE suggests 6 sensors with a suitable distribution that covers approximately all regions of the WDS. The optimal values of the CVaR of the affected population and detection time, and of the probability of undetected events, for the best solution are 17,055 persons, 31 minutes, and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection when evaluating extreme losses in a WDS.
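    The CVaR objective itself is straightforward to compute from scenario losses. A minimal sketch, with synthetic lognormal losses standing in for affected-population values from injection scenarios:

    ```python
    import numpy as np

    # CVaR_alpha: the mean loss in the worst (1 - alpha) tail. Synthetic
    # lognormal losses stand in for affected-population scenario values.
    rng = np.random.default_rng(1)
    losses = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)

    alpha = 0.95
    var = np.quantile(losses, alpha)       # Value at Risk at level alpha
    cvar = losses[losses >= var].mean()    # Conditional Value at Risk
    print(f"VaR_95 = {var:.0f}, CVaR_95 = {cvar:.0f}")
    ```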

  15. Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable for, and outperforms Fourier transforms on, unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, and of short-range and long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively, correlations describing persistence) tends to decrease (respectively, increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine whether a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D freely decaying turbulence is weakened, while it is strengthened for fracture, for the ion signature prior to the Kobe earthquake and for financial markets.
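    The baseline false-alarm calculation for the highest Lomb peak can be sketched by Monte Carlo. The example below, assuming plain Gaussian white noise on an uneven grid (the simplest of the noise models considered), estimates an empirical detection threshold with scipy:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    # Monte Carlo baseline for the false-alarm probability of the highest
    # Lomb peak: pure Gaussian white noise on an unevenly sampled grid.
    # Heavy-tailed or correlated noises (the paper's subject) would simply
    # replace the noise generator below.
    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0.0, 100.0, size=200))   # uneven sampling times
    freqs = np.linspace(0.01, np.pi, 500)            # angular frequencies

    def highest_peak():
        y = rng.standard_normal(t.size)
        return lombscargle(t, y - y.mean(), freqs).max()

    peaks = np.array([highest_peak() for _ in range(2000)])
    print("empirical 1% false-alarm threshold:", np.quantile(peaks, 0.99))
    ```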

  16. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.

  18. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  19. Predicting species distributions from checklist data using site-occupancy models

    USGS Publications Warehouse

    Kery, M.; Gardner, B.; Monnerat, C.

    2010-01-01

    Aim: (1) To increase awareness of the challenges induced by imperfect detection, which is a fundamental issue in species distribution modelling; (2) to emphasize the value of replicate observations for species distribution modelling; and (3) to show how 'cheap' checklist data in faunal/floral databases may be used for the rigorous modelling of distributions by site-occupancy models. Location: Switzerland. Methods: We used checklist data collected by volunteers during 1999 and 2000 to analyse the distribution of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly in Switzerland. We used data from repeated visits to 1-ha pixels to derive 'detection histories' and apply site-occupancy models to estimate the 'true' species distribution, i.e. corrected for imperfect detection. We modelled blue hawker distribution as a function of elevation and year, and its detection probability as a function of elevation, year and season. Results: The best model contained cubic polynomial elevation effects for distribution and quadratic effects of elevation and season for detectability. We compared the site-occupancy model with a conventional distribution model based on a generalized linear model, which assumes perfect detectability (p = 1). The conventional distribution map looked very different from the distribution map obtained using site-occupancy models that accounted for the imperfect detection. The conventional model underestimated the species distribution by 60%, and the slope parameters of the occurrence-elevation relationship were also underestimated when assuming p = 1. Elevation was not only an important predictor of blue hawker occurrence, but also of the detection probability, with a bell-shaped relationship. Furthermore, detectability increased over the season. The average detection probability was estimated at only 0.19 per survey. Main conclusions: Conventional species distribution models do not model species distributions per se but rather the apparent distribution, i.e. an unknown proportion of species distributions. That unknown proportion is equivalent to detectability. Imperfect detection in conventional species distribution models yields underestimates of the extent of distributions and covariate effects that are biased towards zero. In addition, patterns in detectability will erroneously be ascribed to species distributions. In contrast, site-occupancy models applied to replicated detection/non-detection data offer a powerful framework for making inferences about species distributions corrected for imperfect detection. The use of 'cheap' checklist data greatly enhances the scope of applications of this useful class of models. © 2010 Blackwell Publishing Ltd.
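    The core of a site-occupancy model is a mixture likelihood in which a site with no detections may be either unoccupied or occupied but missed. A minimal sketch of this likelihood, with constant occupancy psi and detection probability p and synthetic data (no covariates, unlike the study's elevation and season effects):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Core site-occupancy likelihood: site i is visited J times, yielding
    # y_i detections. A site with y_i = 0 is either occupied-but-missed or
    # truly unoccupied. The binomial coefficient is dropped because it does
    # not depend on the parameters.
    def neg_log_lik(params, y, J):
        psi, p = 1.0 / (1.0 + np.exp(-params))       # logit -> probability
        lik = psi * p**y * (1.0 - p) ** (J - y)      # occupied component
        lik = lik + (y == 0) * (1.0 - psi)           # unoccupied component
        return -np.sum(np.log(lik))

    rng = np.random.default_rng(3)
    J, n_sites, psi_true, p_true = 3, 500, 0.6, 0.19  # p near the study's 0.19
    z = rng.random(n_sites) < psi_true                # latent occupancy states
    y = rng.binomial(J, p_true, size=n_sites) * z     # detections (0 if absent)

    res = minimize(neg_log_lik, x0=np.zeros(2), args=(y, J))
    psi_hat, p_hat = 1.0 / (1.0 + np.exp(-res.x))
    print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")
    ```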

  20. Distribution of Base Pair Alternations in a Periodic DNA Chain: Application of Pólya Counting to a Physical System

    NASA Astrophysics Data System (ADS)

    Hillebrand, Malcolm; Paterson-Jones, Guy; Kalosakas, George; Skokos, Charalampos

    2018-03-01

    In modeling DNA chains, the number of alternations between Adenine-Thymine (AT) and Guanine-Cytosine (GC) base pairs can be considered as a measure of the heterogeneity of the chain, which in turn could affect its dynamics. A probability distribution function of the number of these alternations is derived for circular or periodic DNA. Since there are several symmetries to account for in the periodic chain, necklace counting methods are used. In particular, Pólya's Enumeration Theorem is extended for the case of a group action that preserves partitioned necklaces. This, along with the treatment of generating functions as formal power series, allows for the direct calculation of the number of possible necklaces with a given number of AT base pairs, GC base pairs and alternations. The theoretically obtained probability distribution functions of the number of alternations are accurately reproduced by Monte Carlo simulations and fitted by Gaussians. The effect of the number of base pairs on the characteristics of these distributions is also discussed, as well as the effect of the ratios of the numbers of AT and GC base pairs.
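    The distribution is easy to reproduce by simulation. A short Monte Carlo sketch, with an illustrative chain composition, counts alternations around randomly shuffled circular chains:

    ```python
    import numpy as np
    from collections import Counter

    # Count alternations (neighbouring base pairs of different type) around
    # randomly shuffled circular chains with fixed AT/GC composition.
    rng = np.random.default_rng(4)
    n_AT, n_GC, trials = 30, 20, 100_000
    chain = np.array([0] * n_AT + [1] * n_GC)

    counts = Counter()
    for _ in range(trials):
        rng.shuffle(chain)
        counts[int(np.sum(chain != np.roll(chain, 1)))] += 1  # circular ring

    for k in sorted(counts):          # alternation counts on a ring are even
        print(k, counts[k] / trials)
    ```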

  1. Statistical thermodynamics of clustered populations.

    PubMed

    Matsoukas, Themis

    2014-08-01

    We present a thermodynamic theory for a generic population of M individuals distributed into N groups (clusters). We construct the ensemble of all distributions with fixed M and N, introduce a selection functional that embodies the physics that governs the population, and obtain the distribution that emerges in the scaling limit as the most probable among all distributions consistent with the given physics. We develop the thermodynamics of the ensemble and establish a rigorous mapping to regular thermodynamics. We treat the emergence of a so-called giant component as a formal phase transition and show that the criteria for its emergence are entirely analogous to the equilibrium conditions in molecular systems. We demonstrate the theory by an analytic model and confirm the predictions by Monte Carlo simulation.

  2. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  3. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: as early as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.
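    As a concrete instance of a probability distribution, a short scipy computation of the probability that a standard normal variable falls within one standard deviation of its mean:

    ```python
    from scipy.stats import norm

    # For a normal distribution with mean 0 and standard deviation 1, the
    # probability of an outcome within one standard deviation of the mean.
    p = norm.cdf(1) - norm.cdf(-1)
    print(f"P(-1 < X < 1) = {p:.3f}")   # ~0.683, the familiar 68% rule
    ```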

  4. Almost all quantum channels are equidistant

    NASA Astrophysics Data System (ADS)

    Nechita, Ion; Puchała, Zbigniew; Pawela, Łukasz; Życzkowski, Karol

    2018-05-01

    In this work, we analyze properties of generic quantum channels in the case of large system size. We use random matrix theory and free probability to show that the distance between two independent random channels converges to a constant value as the dimension of the system grows larger. As a measure of the distance we use the diamond norm. In the case of a flat Hilbert-Schmidt distribution on quantum channels, we obtain that the distance converges to 1/2 + 2/π, giving also an estimate for the maximum success probability for distinguishing the channels. We also consider the problem of distinguishing two random unitary rotations.

  5. Quantum States and Generalized Observables: A Simple Proof of Gleason's Theorem

    NASA Astrophysics Data System (ADS)

    Busch, P.

    2003-09-01

    A quantum state can be understood in a loose sense as a map that assigns a value to every observable. Formalizing this characterization of states in terms of generalized probability distributions on the set of effects, we obtain a simple proof of the result, analogous to Gleason's theorem, that any quantum state is given by a density operator. As a corollary we obtain a von Neumann-type argument against noncontextual hidden variables. It follows that on an individual interpretation of quantum mechanics the values of effects are appropriately understood as propensities.

  6. Bouchaud-Mézard model on a random network

    NASA Astrophysics Data System (ADS)

    Ichinomiya, Takashi

    2012-09-01

    We studied the Bouchaud-Mézard (BM) model, which was introduced to explain Pareto's law in a real economy, on a random network. Using “adiabatic and independent” assumptions, we analytically obtained the stationary probability distribution function of wealth. The results show that wealth condensation, indicated by the divergence of the variance of wealth, occurs at a larger J than that obtained by the mean-field theory, where J represents the strength of interaction between agents. We compared our results with numerical simulation results and found that they were in good agreement.
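    The model's dynamics can be sketched with a straightforward Euler-Maruyama integration on an Erdős-Rényi network; all parameter values below are illustrative, the Itô convention is assumed, and no claim is made to match the paper's analytical treatment:

    ```python
    import numpy as np

    # Euler-Maruyama sketch of Bouchaud-Mezard wealth exchange on a random
    # network (Ito convention assumed here):
    #   dW_i = J * sum_j A_ij (W_j - W_i) dt + sqrt(2) * sigma * W_i dB_i
    rng = np.random.default_rng(5)
    N, p, J, sigma, dt, steps = 200, 0.05, 0.05, 0.3, 1e-3, 20_000

    A = (rng.random((N, N)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                # symmetric, no self-links
    deg = A.sum(axis=1)
    W = np.ones(N)

    for _ in range(steps):
        drift = J * (A @ W - deg * W)
        noise = np.sqrt(2.0) * sigma * W * rng.normal(0.0, np.sqrt(dt), N)
        W = np.maximum(W + drift * dt + noise, 1e-12)
        W *= N / W.sum()                       # keep mean wealth at 1

    # Divergence of this variance as sigma^2 grows relative to J signals
    # the wealth-condensation transition discussed in the abstract.
    print("variance of normalised wealth:", W.var())
    ```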

  7. Detection of non-Gaussian fluctuations in a quantum point contact.

    PubMed

    Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M

    2008-07-04

    An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
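    Estimating the first few cumulants from a sample of transmitted charge is direct with scipy's unbiased k-statistics; synthetic Poisson counts are used here as a stand-in for the measured statistics:

    ```python
    import numpy as np
    from scipy.stats import kstat

    # For a Poisson process all cumulants equal the mean; sub-Poissonian
    # transport at transmission T < 1 suppresses the higher cumulants.
    rng = np.random.default_rng(6)
    samples = rng.poisson(lam=50.0, size=200_000)

    c1, c2, c3 = (kstat(samples, n) for n in (1, 2, 3))  # k-statistics
    print(f"cumulants: {c1:.2f}, {c2:.2f}, {c3:.2f}")    # all ~50 here
    ```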

  9. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data are ubiquitous in hydrological time-series databases, yet making educated decisions in problems that require exhaustive time-series knowledge is of fundamental importance. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For applications directly involving the ratio between precipitation and some other quantity, the lack of complete information can result in a poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological-pairing approach between rainfall stations and a probability-based approach, which accounts for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses showed that it is not clear a priori which method performs best; rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is on discussing the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which exert an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions. The methods were first tested on synthetic examples, to have complete control over the impact of several variables, such as the minimum amount of data required to obtain reliable statistical distributions from the selected parametric functions. We then applied the methodology to precipitation datasets collected in the Vancouver area and at a mining site in Peru.
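    Comparing candidate tail models by maximum likelihood is direct with scipy. In this sketch the "observations" are synthetic lognormal wet-day depths, so one candidate family is the true one, and all fits are unconstrained:

    ```python
    import numpy as np
    from scipy import stats

    # Fit the four candidate families named above to synthetic rain depths
    # and compare them by AIC (unconstrained maximum-likelihood fits).
    rng = np.random.default_rng(7)
    rain = rng.lognormal(mean=1.0, sigma=0.9, size=2000)   # depths (mm)

    candidates = {
        "Log-Normal": stats.lognorm,
        "Generalized Pareto": stats.genpareto,
        "Generalized Extreme Value": stats.genextreme,
        "Gamma": stats.gamma,
    }
    for name, dist in candidates.items():
        params = dist.fit(rain)
        aic = 2 * len(params) - 2 * dist.logpdf(rain, *params).sum()
        print(f"{name:26s} AIC = {aic:.0f}")
    ```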

  10. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameter to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.

  11. Propensity scores-potential outcomes framework to incorporate severity probabilities in the highway safety manual crash prediction algorithm.

    PubMed

    Sasidharan, Lekshmi; Donnell, Eric T

    2014-10-01

    Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of occurrence of fatal and severe injury crashes at a lighted intersection was 1 in 35 crashes and the estimated risk ratio indicates that the respective probabilities at an unlighted intersection was 1.14 times higher compared to lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity scores matching. Traditional binary logit analysis suggests that the probability of occurrence of severe injury crashes is higher at lighted intersections compared to unlighted intersections, which contradicts the findings obtained from the propensity scores-potential outcomes framework. This finding underscores the importance of having comparable treated and untreated entities in traffic safety countermeasure evaluations. Copyright © 2014 Elsevier Ltd. All rights reserved.
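    The propensity-score step can be sketched as a logistic model for the treatment followed by nearest-neighbour matching on the estimated scores; all data and variable names below are synthetic placeholders, not the Minnesota data:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Model P(treated | covariates), then match each treated unit (e.g., a
    # lighted intersection) to the untreated unit with the nearest score
    # before comparing severe-crash rates.
    rng = np.random.default_rng(8)
    n = 2000
    X = rng.normal(size=(n, 3))                  # e.g. volume, speed, lanes
    treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))  # confounded
    severe = rng.random(n) < 0.05 + 0.02 * (X[:, 0] > 0) - 0.01 * treated

    scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    t_idx = np.where(treated)[0]
    u_idx = np.where(~treated)[0]
    diffs = np.abs(scores[u_idx][None, :] - scores[t_idx][:, None])
    matches = u_idx[diffs.argmin(axis=1)]        # nearest-score controls

    print("severe rate, treated:        ", severe[t_idx].mean().round(3))
    print("severe rate, matched control:", severe[matches].mean().round(3))
    ```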

  12. Using rank-order geostatistics for spatial interpolation of highly skewed data in a heavy-metal contaminated site.

    PubMed

    Juang, K W; Lee, D Y; Ellsworth, T R

    2001-01-01

    The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.

  13. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
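    The two-step recipe described above translates almost line for line into numpy and scipy; the spectrum and target marginal below are arbitrary illustrative choices (the nonlinear second step slightly distorts the imposed spectrum, which is the issue the full method has to manage):

    ```python
    import numpy as np
    from scipy import stats

    # (1) Colour a white Gaussian field in the Fourier domain with a chosen
    # power spectrum; (2) map the coloured Gaussian amplitudes through an
    # inverse-CDF transform to impose the target marginal (here: gamma).
    rng = np.random.default_rng(9)
    N = 256
    white = rng.standard_normal((N, N))

    kx = np.fft.fftfreq(N)[:, None]
    ky = np.fft.fftfreq(N)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = k[0, 1]                       # avoid division by zero at DC
    field = np.real(np.fft.ifft2(np.fft.fft2(white) * k**-1.5))  # S ~ k^-3
    field = (field - field.mean()) / field.std()

    u = stats.norm.cdf(field)               # uniform marginal, correlations kept
    out = stats.gamma(a=2.0, scale=1.0).ppf(u)

    print(out.shape, out.mean(), out.var())  # gamma(2,1): mean ~2, var ~2
    ```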

  14. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept-what we think the probability now is-depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  15. The global impact distribution of Near-Earth objects

    NASA Astrophysics Data System (ADS)

    Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.

    2016-02-01

    Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distribution of 69 potentially threatening NEOs from these lists that produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results and the impact probability distribution, consequently, deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.

  16. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  17. QKD-based quantum private query without a failure probability

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Gao, Fei; Huang, Wei; Wen, QiaoYan

    2015-10-01

    In this paper, we present a quantum-key-distribution (QKD)-based quantum private query (QPQ) protocol utilizing single-photon signal of multiple optical pulses. It maintains the advantages of the QKD-based QPQ, i.e., easy to implement and loss tolerant. In addition, different from the situations in the previous QKD-based QPQ protocols, in our protocol, the number of the items an honest user will obtain is always one and the failure probability is always zero. This characteristic not only improves the stability (in the sense that, ignoring the noise and the attack, the protocol would always succeed), but also benefits the privacy of the database (since the database will no more reveal additional secrets to the honest users). Furthermore, for the user's privacy, the proposed protocol is cheat sensitive, and for security of the database, we obtain an upper bound for the leaked information of the database in theory.

  18. Probabilistic track coverage in cooperative sensor networks.

    PubMed

    Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A

    2010-12-01

    The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.

  19. Noise-induced multistability in the regulation of cancer by genes and pseudogenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrosyan, K. G., E-mail: pkaren@phys.sinica.edu.tw; Hu, Chin-Kun, E-mail: huck@phys.sinica.edu.tw; National Center for Theoretical Sciences, National Tsing Hua University, Hsinchu 30013, Taiwan

    2016-07-28

    We extend a previously introduced model of stochastic gene regulation of cancer to a nonlinear case in which both gene and pseudogene messenger RNAs (mRNAs) are self-regulated. The model consists of stochastic Boolean genetic elements and possesses noise-induced multistability (multimodality). We obtain analytical expressions for the probabilities in the case of a constant but finite number of microRNA molecules, which act as a noise source for the competing gene and pseudogene mRNAs. The probability distribution functions display both a global bistability regime and even-odd number oscillations for a certain range of model parameters. Statistical characteristics of the mRNA level fluctuations are evaluated. The obtained results of the extended model advance our understanding of the process of stochastic gene and pseudogene expression, which is crucial in the regulation of cancer.

  20. Modeling the spatial distribution of Chagas disease vectors using environmental variables and people's knowledge.

    PubMed

    Hernández, Jaime; Núñez, Ignacia; Bacigalupo, Antonella; Cattan, Pedro E

    2013-05-31

    Chagas disease is caused by the protozoan Trypanosoma cruzi, which is transmitted to mammal hosts by triatomine insect vectors. The goal of this study was to model the spatial distribution of triatomine species in an endemic area. Vector's locations were obtained with a rural householders' survey. This information was combined with environmental data obtained from remote sensors, land use maps and topographic SRTM data, using the machine learning algorithm Random Forests to model species distribution. We analysed the combination of variables on three scales: 10 km, 5 km and 2.5 km cell size grids. The best estimation, explaining 46.2% of the triatomines spatial distribution, was obtained for 5 km of spatial resolution. Presence probability distribution increases from central Chile towards the north, tending to cover the central-coastal region and avoiding areas of the Andes range. The methodology presented here was useful to model the distribution of triatomines in an endemic area; it is best explained using 5 km of spatial resolution, and their presence increases in the northern part of the study area. This study's methodology can be replicated in other countries with Chagas disease or other vectorial transmitted diseases, and be used to locate high risk areas and to optimize resource allocation, for prevention and control of vectorial diseases.
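    The modelling step can be sketched with scikit-learn; the features and presence labels below are synthetic placeholders for the survey and remote-sensing covariates:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Train a Random Forest on grid cells with presence/absence labels and
    # environmental covariates; predict_proba yields a presence-probability
    # surface. Feature names are illustrative only.
    rng = np.random.default_rng(10)
    n_cells = 5000
    X = rng.normal(size=(n_cells, 4))        # NDVI, temperature, elevation, land use
    presence = rng.random(n_cells) < 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 2])))

    rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
    rf.fit(X, presence)
    prob_map = rf.predict_proba(X)[:, 1]     # presence probability per cell

    print("OOB accuracy:", round(rf.oob_score_, 3))
    print("mean presence probability:", round(float(prob_map.mean()), 3))
    ```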

  2. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
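
    A minimal sketch of the Monte Carlo side of this comparison, under an invented one-line impact-time map rather than the paper's six degrees-of-freedom propagation: sample the uncertain inputs, push each sample through the model, and summarize the resulting impact-time distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20_000

    # Toy uncertain inputs: ballistic coefficient and a density-scale bias.
    beta = rng.normal(120.0, 10.0, n)       # kg/m^2
    rho_bias = rng.lognormal(0.0, 0.1, n)   # multiplicative density uncertainty

    # Invented impact-time map (hours): slower decay for heavy, streamlined
    # bodies; faster decay in a denser atmosphere. The exponent is arbitrary.
    t_impact = 48.0 * (beta / 120.0) ** 1.5 / rho_bias

    lo, hi = np.percentile(t_impact, [5, 95])
    print(f"mean {t_impact.mean():.1f} h, 5-95% interval [{lo:.1f}, {hi:.1f}] h")
    ```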

  3. Simulation of Swap-Out Reliability For The Advance Photon Source Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    2017-06-01

    The proposed upgrade of the Advanced Photon Source (APS) to a multibend-achromat lattice relies on the use of swap-out injection to accommodate the small dynamic acceptance, allow use of unusual insertion devices, and minimize collective effects at high single-bunch charge. This, combined with the short beam lifetime, will make injector reliability even more important than it is for top-up operation. We used historical data for the APS injector complex to obtain probability distributions for injector up-time and down-time durations. Using these distributions, we simulated several years of swap-out operation for the upgraded lattice for several operating modes. The results indicate that obtaining very high availability of beam in the storage ring will require improvements to injector reliability.
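
    The simulation idea reduces to alternating draws of up-time and down-time durations. The sketch below assumes lognormal durations with made-up parameters; the study instead used distributions fitted to historical APS injector data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    up_mu, up_sigma = np.log(40.0), 1.0      # up-time (h), hypothetical lognormal
    down_mu, down_sigma = np.log(1.5), 0.8   # down-time (h), hypothetical lognormal

    t, t_end, up_total = 0.0, 3 * 365 * 24.0, 0.0   # three simulated years
    while t < t_end:
        up = rng.lognormal(up_mu, up_sigma)
        down = rng.lognormal(down_mu, down_sigma)
        up_total += min(up, t_end - t)
        t += up + down

    print(f"simulated injector availability: {up_total / t_end:.3f}")
    ```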

  4. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, separated or not from zero. Particular instability results are obtained for a general type of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the dormant tumour steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.

  5. Bayesian analysis of multimodal data and brain imaging

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; Backonja, Miroslav; Wakai, Ronald T.; Rutecki, Paul; Haughton, Victor

    2000-06-01

    It is often the case that information about a process can be obtained using a variety of methods. Each method is employed because of specific advantages over the competing alternatives. An example in medical neuro-imaging is the choice between fMRI and MEG modes, where fMRI can provide high spatial resolution in comparison to the superior temporal resolution of MEG. The combination of data from varying modes provides the opportunity to infer results that may not be possible by means of any one mode alone. We discuss a Bayesian and learning theoretic framework for enhanced feature extraction that is particularly suited to multi-modal investigations of massive data sets from multiple experiments. In the following Bayesian approach, acquired knowledge (information) regarding various aspects of the process is directly incorporated into the formulation. This information can come from a variety of sources. In our case, it represents statistical information obtained from other modes of data collection. The information is used to train a learning machine to estimate a probability distribution, which is used in turn by a second machine as a prior, in order to produce a more refined estimation of the distribution of events. The computational demand of the algorithm is handled by proposing a distributed parallel implementation on a cluster of workstations that can be scaled to address real-time needs if required. We provide a simulation of these methods on a set of synthetically generated MEG and EEG data. We show how spatial and temporal resolutions improve by using prior distributions. The method on fMRI signals permits one to construct the probability distribution of the non-linear hemodynamics of the human brain (real data). These computational results are in agreement with biologically based measurements of other labs, as reported to us by researchers from the UK. We also provide preliminary analysis involving multi-electrode cortical recordings that accompany behavioral data in pain experiments on freely moving mice subjected to moderate heat delivered by an electric bulb. Summary of new or breakthrough ideas: (1) A new method to estimate the probability distribution for measurement of the nonlinear hemodynamics of the brain from multimodal neuronal data. This is the first time that such an idea has been tried, to our knowledge. (2) A breakthrough in improving the time resolution of fMRI signals using (1) above.

  6. Probability distribution and statistical properties of spherically compensated cosmic regions in ΛCDM cosmology

    NASA Astrophysics Data System (ADS)

    Alimi, Jean-Michel; de Fromont, Paul

    2018-04-01

    The statistical properties of cosmic structures are well known to be strong probes for cosmology. In particular, several studies have tried to use the cosmic void counting number to obtain tight constraints on dark energy. In this paper, we model the statistical properties of these regions using the CoSphere formalism (de Fromont & Alimi) in both the primordial and the non-linearly evolved Universe in the standard Λ cold dark matter model. This formalism applies similarly for minima (voids) and maxima (such as DM haloes), which are here considered symmetrically. We first derive the full joint Gaussian distribution of CoSphere's parameters in the Gaussian random field. We recover the results of Bardeen et al. only in the limit where the compensation radius becomes very large, i.e. when the central extremum decouples from its cosmic environment. We compute the probability distribution of the compensation size in this primordial field. We show that this distribution is redshift independent and can be used to model the cosmic void size distribution. We also derive the statistical distribution of the peak parameters introduced by Bardeen et al. and discuss their correlation with the cosmic environment. We show that small central extrema with low density are associated with narrow compensation regions with deep compensation density, while higher central extrema are preferentially located in larger but smoother over/under massive regions.

  7. Optimized lower leg injury probability curves from post-mortem human subject tests under axial impacts

    PubMed Central

    Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko

    2015-01-01

    Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Score (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal and log-logistic distributions was selected based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature, and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m, and 74.9 ± 13.8 kg. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function of the three distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-old age groups at 5%, 25%, and 50% risk levels for lower leg fracture. For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381
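
    The distribution-selection step can be illustrated on synthetic, uncensored peak-force data; the actual analysis is a censored parametric survival fit with age as a covariate, so the following only shows the AIC comparison among the three candidate families as implemented in scipy.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    force = rng.weibull(3.5, 60) * 8.0   # synthetic injury peak forces (kN)

    candidates = {"Weibull": stats.weibull_min,
                  "log-normal": stats.lognorm,
                  "log-logistic": stats.fisk}
    for name, dist in candidates.items():
        params = dist.fit(force, floc=0.0)         # location pinned at zero
        loglik = np.sum(dist.logpdf(force, *params))
        aic = 2 * (len(params) - 1) - 2 * loglik   # loc fixed, not a free parameter
        print(f"{name:12s} AIC = {aic:7.1f}")
    ```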

  8. Sub-Poissonian photon statistics in the coherent state Jaynes-Cummings model in non-resonance

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-tai; Fan, An-fu

    1992-03-01

    We study a model with a two-level atom (TLA) interacting non-resonantly with a single-mode quantized cavity field (QCF). The photon number probability function, the mean photon number and Mandel's fluctuation parameter are calculated. Sub-Poissonian distributions of the photon statistics are obtained in the non-resonant interaction. These statistical properties are strongly dependent on the detuning parameters.

  9. Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.

    1997-01-01

    The properties of ceramic matrix composites (CMCs) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMCs. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique, are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte Carlo simulation technique, and the agreement between the two solutions is excellent, as shown via select examples.

  10. Finite-size scaling of survival probability in branching processes

    NASA Astrophysics Data System (ADS)

    Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro

    2015-04-01

    Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y/(e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
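
    The finite-generation survival probability is easy to check numerically from the exact generating-function recursion. The sketch below assumes Poisson offspring, for which the critical point (mean 1, unit variance) gives the classic P_n ≈ 2/n decay that the scaling function above generalizes.

    ```python
    import numpy as np

    def survival(m, n_gen):
        # Extinction recursion e_n = f(e_{n-1}) with Poisson(m) offspring pgf
        # f(s) = exp(m (s - 1)); survival is P_n = 1 - e_n = 1 - exp(-m P_{n-1}).
        p = 1.0
        for _ in range(n_gen):
            p = 1.0 - np.exp(-m * p)
        return p

    for n in (10, 100, 1000):
        print(f"n = {n:5d}: P_n = {survival(1.0, n):.5f}, 2/n = {2.0 / n:.5f}")
    ```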

  11. A Numerical Method for Obtaining Monoenergetic Neutron Flux Distributions and Transmissions in Multiple-Region Slabs

    NASA Technical Reports Server (NTRS)

    Schneider, Harold

    1959-01-01

    This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i-th volume element or subregion that will make their next collision in the j-th volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in vacuum is worked by the numerical, Monte Carlo, and P3 spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.

  12. An empirical model for earthquake probabilities in the San Francisco Bay region, California, 2002-2031

    USGS Publications Warehouse

    Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.

    2003-01-01

    The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).

  13. Development and application of a probability distribution retrieval scheme to the remote sensing of clouds and precipitation

    NASA Astrophysics Data System (ADS)

    McKague, Darren Shawn

    2001-12-01

    The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel by pixel; the results can then be summarized over large data sets to obtain the desired statistics. These methods can be quite computationally expensive and typically don't provide errors on the statistics. A new method is developed to directly retrieve probability distributions of parameters from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions. The method can retrieve joint distributions of parameters, which allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance-to-retrieval-parameter PDF transformation given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation-parameter-to-radiance transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields. The beamfilling factors vary considerably depending upon the horizontal structure of the rain. The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)

  14. Conditional probability distribution function of "energy transfer rate" (PDF(ɛ|PVI)) as compared with its counterpart of temperature (PDF(T|PVI)) at the same condition of fluctuation

    NASA Astrophysics Data System (ADS)

    He, Jiansen; Wang, Yin; Pei, Zhongtian; Zhang, Lei; Tu, Chuanyi

    2017-04-01

    The energy transfer rate of turbulence is not uniform everywhere but is suggested to follow a certain distribution, e.g., a lognormal distribution (Kolmogorov 1962). The inhomogeneous transfer rate leads to the emergence of intermittency, which may be identified with some parameter, e.g., normalized partial variance increments (PVI) (Greco et al., 2009). Large PVI values of magnetic field fluctuations are found to have a temperature distribution with median and mean values higher than those for small PVI levels (Osman et al., 2012). However, there is a large proportion of overlap between the temperature distributions associated with the smaller and larger PVIs. So it is recognized that PVI alone cannot fully determine the temperature, since no one-to-one mapping exists. One may be curious about the reason for the considerable overlap of the conditional temperature distribution for different levels of PVI. Hotter plasma is usually speculated to be heated more by greater dissipation of turbulence energy, corresponding to a larger energy cascade rate, if the temperature fluctuation of the eigen wave mode is not taken into account. To explore the statistical relationship between turbulence cascading and the plasma thermal state, we aim to study and reveal, for the first time, the conditional probability function of the "energy transfer rate" under different levels of PVI (PDF(ɛ|PVI)), and compare it with the conditional probability function of temperature. The conditional probability distribution function, PDF(ɛ|PVI), is derived from PDF(PVI|ɛ)·PDF(ɛ)/PDF(PVI) according to the Bayesian theorem. PDF(PVI) can be obtained directly from the data. PDF(ɛ) is derived from a conjugate-gradient inversion of PDF(PVI) by reasonably assuming that PDF(δB|σ) is a Gaussian distribution, where PVI = |δB|/σ and σ ∝ (ɛℓ)^(1/3). PDF(ɛ) can also be acquired by fitting PDF(δB) with the integral ∫ PDF(δB|σ) PDF(σ) dσ. As a result, PDF(ɛ|PVI) is found to shift to a higher median value of ɛ with increasing PVI, but with a significant overlap of the PDFs for different PVIs. Therefore, PDF(ɛ|PVI) is similar to PDF(T|PVI) in the sense of slow migration with increasing PVI. A detailed comparison between these two conditional PDFs is also performed.
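
    The Bayesian step PDF(ɛ|PVI) ∝ PDF(PVI|ɛ)·PDF(ɛ) can be sketched on a grid. Below, PDF(ɛ) is simply assumed lognormal (in the study it comes from inverting PDF(PVI)), and the likelihood takes δB Gaussian at fixed ɛ with local amplitude scaling as ɛ^(1/3); the posterior median of ɛ then shifts upward with PVI, as described.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid, cumulative_trapezoid

    eps = np.logspace(-2, 2, 400)                  # candidate cascade rates
    p_eps = np.exp(-0.5 * np.log(eps) ** 2) / eps  # assumed lognormal PDF(eps)
    p_eps /= trapezoid(p_eps, eps)

    def p_pvi_given_eps(pvi, eps):
        s = eps ** (1.0 / 3.0)  # local fluctuation amplitude ~ eps^(1/3)
        return np.sqrt(2.0 / np.pi) / s * np.exp(-pvi**2 / (2.0 * s**2))

    for pvi in (1.0, 3.0, 6.0):
        post = p_pvi_given_eps(pvi, eps) * p_eps   # Bayes, up to 1/PDF(PVI)
        post /= trapezoid(post, eps)
        cdf = cumulative_trapezoid(post, eps, initial=0.0)
        print(f"PVI = {pvi}: posterior median eps = {np.interp(0.5, cdf, eps):.3g}")
    ```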

  15. Hidden regularity and universal classification of fast side chain motions in proteins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajeshwar, Rajitha; Smith, Jeremy C.; Krishnam, Marimuthu

    Proteins display characteristic dynamical signatures that appear to be universal across all proteins regardless of topology and size. Here, we systematically characterize the universal features of fast side chain motions in proteins by examining the conformational energy surfaces of individual residues obtained using enhanced sampling molecular dynamics simulation (618 free energy surfaces obtained from 0.94 s of MD simulation). The side chain conformational free energy surfaces obtained using the adaptive biasing force (ABF) method for a set of eight proteins with different molecular weights and secondary structures are used to determine the methyl axial NMR order parameters (O²axis), populations of side chain rotamer states (ρ), conformational entropies (S_conf), probability fluxes, and activation energies for side chain inter-rotameric transitions. The free energy barriers separating side chain rotamer states range from 0.3 to 12 kcal/mol in all proteins and follow a trimodal distribution with an intense peak at ~5 kcal/mol and two shoulders at ~3 and ~7.5 kcal/mol, indicating that some barriers are favored over others by proteins to maintain a balance between their conformational stability and flexibility. The origin and the influence of the trimodal barrier distribution on the distribution of O²axis and the side chain conformational entropy are discussed. A hierarchical grading of rotamer states based on the conformational free energy barriers, entropy, and probability flux reveals three distinct classes of side chains in proteins. A unique nonlinear correlation is established between O²axis and the side chain rotamer populations (ρ). In conclusion, the apparent universality in the O²axis versus ρ correlation, the trimodal barrier distribution, and the distinct characteristics of three classes of side chains observed among all proteins indicate a hidden regularity (or commonality) in the dynamical heterogeneity of fast side chain motions in proteins.

  16. Nonadditive entropies yield probability distributions with biases not warranted by the data.

    PubMed

    Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A

    2013-11-01

    Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.

  17. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although the surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance, or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
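
    The MAP computations mentioned here reduce to standard least-squares variants. A toy sketch with a synthetic Green's function matrix follows; the prior mean and covariance terms that would augment G and d in a full MAP estimate are omitted for brevity.

    ```python
    import numpy as np
    from scipy.optimize import nnls, lsq_linear

    rng = np.random.default_rng(5)
    G = rng.normal(size=(30, 10))                # synthetic Green's functions
    true_slip = np.clip(rng.normal(1.0, 0.5, 10), 0.0, None)
    d = G @ true_slip + rng.normal(scale=0.05, size=30)

    slip_pos, _ = nnls(G, d)                     # positivity only
    res = lsq_linear(G, d, bounds=(0.0, 2.0))    # positivity + upper bound
    print(slip_pos.round(2))
    print(res.x.round(2))
    ```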

  18. Robust optimization based upon statistical theory.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose distributions that are robust against interfraction and intrafraction motion alike, effectively removing the need for indiscriminate safety margins.

  19. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  20. Probability of a great earthquake to recur in the Tokai district, Japan: reevaluation based on newly-developed paleoseismology, plate tectonics, tsunami study, micro-seismicity and geodetic measurements

    NASA Astrophysics Data System (ADS)

    Rikitake, T.

    1999-03-01

    In light of newly-acquired geophysical information about earthquake generation in the Tokai area, Central Japan, where occurrence of a great earthquake of magnitude 8 or so has recently been feared, probabilities of earthquake occurrence in the near future are reevaluated. Much of the data used for evaluation here relies on recently-developed paleoseismology, tsunami study and GPS geodesy. The new Weibull distribution analysis of the recurrence tendency of great earthquakes in the Tokai-Nankai zone indicates that the mean return period of great earthquakes there is estimated as 109 yr with a standard deviation amounting to 33 yr. These values do not differ much from those of previous studies (Rikitake, 1976, 1986; Utsu, 1984). Taking the newly-determined velocities of the motion of the Philippine Sea plate at various portions of the Tokai-Nankai zone into account, the ultimate displacements to rupture at the plate boundary are obtained. A Weibull distribution analysis results in a mean ultimate displacement amounting to 4.70 m with a standard deviation estimated as 0.86 m. A return period amounting to 117 yr is obtained at the Suruga Bay portion by dividing the mean ultimate displacement by the relative plate velocity. With the aid of the fault models as determined from the tsunami studies, the increases in the cumulative seismic slips associated with the great earthquakes are examined at various portions of the zone. It appears that a slip-predictable model can better be applied to the occurrence mode of great earthquakes in the zone than a time-predictable model. The crustal strain accumulating over the Tokai area as estimated from the newly-developed geodetic work including the GPS observations is compared to the ultimate strain presumed by the above two models. The probabilities for a great earthquake to recur in the Tokai district are then estimated with the aid of the Weibull analysis parameters obtained for the four cases discussed above. All the probabilities evaluated for the four cases take on values ranging from 35 to 45 percent for a ten-year period following the year 2000.
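
    A worked version of this kind of hazard calculation, assuming a Weibull recurrence model matched to the quoted mean (109 yr) and standard deviation (33 yr), and taking, purely for illustration, an elapsed time of about 146 yr as of 2000 (i.e., since the 1854 great Tokai event): the conditional probability of rupture in the next 10 years is P = 1 - S(t0+10)/S(t0), with S the Weibull survival function.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import gamma

    mean, sd = 109.0, 33.0                 # quoted recurrence mean and sd (yr)
    cv2 = (sd / mean) ** 2
    # Weibull shape k matching the coefficient of variation, then the scale.
    k = brentq(lambda k: gamma(1 + 2/k) / gamma(1 + 1/k)**2 - 1 - cv2, 0.5, 20.0)
    lam = mean / gamma(1 + 1 / k)

    S = lambda t: np.exp(-(t / lam) ** k)  # Weibull survival function
    elapsed = 146.0                        # yr elapsed as of 2000 (assumed)
    p10 = 1.0 - S(elapsed + 10.0) / S(elapsed)
    print(f"shape {k:.2f}, scale {lam:.1f} yr, 10-yr probability {p10:.2f}")
    ```

    With these inputs the result lands near 0.4, consistent with the 35-45 percent range quoted above.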

  1. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  2. PRODIGEN: visualizing the probability landscape of stochastic gene regulatory networks in state and time space.

    PubMed

    Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta

    2017-02-15

    Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.

  3. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was the most suitable for this region.
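
    A compact sketch of the comparison: fit two- and three-parameter Weibull and Frechet models to synthetic inter-event times and rank them by the Kolmogorov-Smirnov statistic. The data and the use of scipy (rather than Easyfit/Matlab) are stand-ins.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    times = rng.weibull(1.4, 80) * 12.0 + 2.0   # synthetic elapsed times (yr)

    fits = {"Weibull (2-par)": (stats.weibull_min, {"floc": 0.0}),
            "Weibull (3-par)": (stats.weibull_min, {}),
            "Frechet": (stats.invweibull, {"floc": 0.0})}
    for name, (dist, kw) in fits.items():
        params = dist.fit(times, **kw)
        ks = stats.kstest(times, dist.cdf, args=params)
        print(f"{name:16s} D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    ```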

  4. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

    Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.

  5. PROCOS: computational analysis of protein-protein complexes.

    PubMed

    Fink, Florian; Hochrein, Jochen; Wolowski, Vincent; Merkl, Rainer; Gronwald, Wolfram

    2011-09-01

    One of the main challenges in protein-protein docking is a meaningful evaluation of the many putative solutions. Here we present a program (PROCOS) that calculates a probability-like measure that a given complex is native. In contrast to scores often used for analyzing complex structures, the calculated probabilities offer the advantage of providing a fixed range of expected values. This will allow, in principle, the comparison of models corresponding to different targets that were solved with the same algorithm. Judgments are based on distributions of properties derived from a large database of native and false complexes. For complex analysis, PROCOS uses these property distributions of native and false complexes together with a support vector machine (SVM). PROCOS was compared to the established scoring schemes of ZRANK and DFIRE. Employing a set of experimentally solved native complexes, high probability values above 50% were obtained for 90% of these structures. Next, the performance of PROCOS was tested on the 40 binary targets of the Dockground decoy set, on 14 targets of the RosettaDock decoy set and on 9 targets that participated in the CAPRI scoring evaluation. Again the advantage of using a probability-based scoring system becomes apparent, and a reasonable number of near-native complexes was found within the top-ranked complexes. In conclusion, a novel fully automated method is presented that allows the reliable evaluation of protein-protein complexes. Copyright © 2011 Wiley Periodicals, Inc.

  6. A moment-convergence method for stochastic analysis of biochemical reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiajun; Nie, Qing; Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn

    Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation of the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.

  7. Open Quantum Walks with Noncommuting Jump Operators

    NASA Astrophysics Data System (ADS)

    Caballar, Roland Cristopher; Petruccione, Francesco; Sinayskiy, Ilya

    2014-03-01

    We examine homogeneous open quantum walks along a line, wherein each forward step is due to one quantum jump operator, and each backward step due to another quantum jump operator. We assume that these two quantum jump operators do not commute with each other. We show that if the system has N internal degrees of freedom, for particular forms of these quantum jump operators, we can obtain exact probability distributions which fall into two distinct classes, namely Gaussian distributions and solitonic distributions. We also show that it is possible for a maximum of 2 solitonic distributions to be present simultaneously in the system. Finally, we consider applications of these classes of jump operators in quantum state preparation and quantum information. We acknowledge support from the National Institute for Theoretical Physics (NITheP).

  8. Correlated continuous time random walk and option pricing

    NASA Astrophysics Data System (ADS)

    Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao

    2016-04-01

    In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. Then, we apply this process to the option pricing problem. Supposing the price of the underlying is driven by this CCTRW, we find that this model captures the subdiffusive characteristic of financial markets. By using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing the obtained model with the classical Black-Scholes model, we find that the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also introduced to confirm that the obtained results fit the real data well.

  9. Incorporating Skew into RMS Surface Roughness Probability Distribution

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.; Stahl, H. Philip.

    2013-01-01

    The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution in the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
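
    A small illustration of the central point, using a lognormal as one possible asymmetric choice (the record does not commit to a particular family): for positively skewed roughness data the Gaussian fit puts the mode at the sample mean, while the skewed fit yields a lower, more probable mode.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    rms = rng.lognormal(np.log(2.0), 0.4, 200)  # synthetic RMS roughness (nm)

    mu_g, _ = stats.norm.fit(rms)               # Gaussian: mode = mean
    shape, _, scale = stats.lognorm.fit(rms, floc=0.0)
    mode_ln = scale * np.exp(-shape**2)         # analytic lognormal mode

    print(f"Gaussian mode (= mean): {mu_g:.2f} nm")
    print(f"lognormal mode:         {mode_ln:.2f} nm")
    ```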

  10. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  11. Transport of polar and non-polar solvents through a carbon nanotube

    NASA Astrophysics Data System (ADS)

    Chopra, Manish; Phatak, Rohan; Choudhury, N.

    2013-02-01

    Transport of water through narrow pores is important in chemistry, biology and materials science. In this work, we employ atomistic molecular dynamics (MD) simulations to carry out a comparative study of the transport of a polar and a non-polar solvent through a carbon nanotube (CNT). The flow of water as well as methane through the nanotube is estimated in terms of the number of translocation events and compared. Transport events occurred in bursts of unidirectional translocation pulses in both cases. Probability density and cumulative probability distribution functions are obtained for the translocated particles and for particles coming out from the same side, with respect to the time they spent in the nanochannel.

  12. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
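
    The fitting procedure for the lower 90% of the distribution can be sketched as follows on synthetic data; note that fitting a lognormal to a truncated sample is only a rough check, not a rigorous truncated-likelihood fit.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    # Synthetic irradiance record with flash-like heavy-tail excursions.
    Ed = rng.lognormal(0.0, 0.5, 5000) * (1.0 + 0.3 * rng.pareto(3.0, 5000))

    body = Ed[Ed <= np.percentile(Ed, 90)]      # drop the flash-dominated tail
    shape, _, scale = stats.lognorm.fit(body, floc=0.0)

    for q in (0.25, 0.50, 0.75):
        emp = np.quantile(body, q)
        mod = stats.lognorm.ppf(q, shape, loc=0.0, scale=scale)
        print(f"q = {q:.2f}: empirical {emp:.2f}, lognormal {mod:.2f}")
    ```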

  13. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    PubMed

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high-density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed into Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, or ε*_τ(r) = h*_τ r^(α_τ), in which case a correlation with hydrophobicity scales is found for the product α_τ h*_τ. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
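
    The paper's functional form is straightforward to fit. A minimal sketch, assuming the shell-count model n(r) ∝ r²/(exp(β(r − μ)) + 1) and synthetic noisy data generated from it:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def shell_density(r, a, beta, mu):
        # shell volume (~r^2) times a Fermi-Dirac occupancy
        return a * r**2 / (np.exp(beta * (r - mu)) + 1.0)

    rng = np.random.default_rng(9)
    r = np.linspace(0.05, 2.5, 60)
    counts = shell_density(r, 1.0, 6.0, 1.2) + rng.normal(scale=0.01, size=r.size)

    popt, _ = curve_fit(shell_density, r, counts, p0=(1.0, 5.0, 1.0))
    print("fitted a, beta, mu:", np.round(popt, 2))
    ```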

  14. Modeling the Dependency Structure of Integrated Intensity Processes

    PubMed Central

    Ma, Yong-Ki

    2015-01-01

    This paper studies an important issue of dependence structure. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is explicitly obtained from the copula with exponential marginal distributions. Subsequently, this result can provide a very useful guide for credit risk management. PMID:26270638

  15. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors that influence power grid investment capacity, we build an investment capacity analysis model with depreciation cost, sales price, sales quantity, net profit, financing, and the GDP of the secondary industry as model variables. After carrying out Kolmogorov-Smirnov tests, we obtain the probability distribution of each influence factor. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.

  16. Superconducting surface impedance under radiofrequency field

    DOE PAGES

    Xiao, Binping P.; Reece, Charles E.; Kelley, Michael J.

    2013-04-26

    Based on BCS theory with moving Cooper pairs, the electron state distribution at 0 K and the probability of electron occupation at finite temperature have been derived and applied to anomalous skin effect theory to obtain the surface impedance of a superconductor under a radiofrequency (RF) field. We present the numerical results for Nb and compare these with representative RF field-dependent effective surface resistance measurements from a 1.5 GHz resonant structure.

  17. Insights into the dynamics of planetary interiors obtained through the study of global distribution of volcanoes I: Empirical calibration on Earth

    NASA Astrophysics Data System (ADS)

    Cañon-Tapia, Edgardo; Mendoza-Borunda, Ramón

    2014-06-01

    The distribution of volcanic features is ultimately controlled by processes taking place beneath the surface of a planet. For this reason, characterization of volcano distribution at a global scale can be used to obtain insights concerning dynamic aspects of planetary interiors. Until now, studies of this type have focused on volcanic features of a specific type, or have concentrated on relatively small regions. In this paper (the first of a series of three), we describe the distribution of volcanic features observed over the entire surface of the Earth, combining an extensive database of submarine and subaerial volcanoes. The analysis is based on spatial density contours obtained with the Fisher kernel. Based on an empirical approach that makes no a priori assumptions concerning the number of modes that should characterize the density distribution of volcanism, we identified the most significant modes. Using those modes as a base, the relevant distance for the formation of clusters of volcanoes is constrained to be on the order of 100 to 200 km. In addition, it is noted that the most significant modes lead to the identification of clusters that outline the most important tectonic margins on Earth without the need to make any ad hoc assumptions. Consequently, we suggest that this method has the potential of yielding insights about the probable occurrence of tectonic features within other planets.
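
    A minimal sketch of a spherical kernel density estimate with a Fisher (von Mises-Fisher) kernel, the tool named above: each site contributes a term proportional to exp(kappa * cos(angular distance)). The site coordinates and the concentration parameter kappa are illustrative assumptions.

    ```python
    # Sketch of a Fisher-kernel density estimate on the sphere; coordinates
    # and kappa are illustrative, not from the paper's database.
    import numpy as np

    def to_xyz(lat_deg, lon_deg):
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        return np.stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)], axis=-1)

    def fisher_kde(grid_xyz, sites_xyz, kappa=50.0):
        dots = grid_xyz @ sites_xyz.T            # cos(angular distance)
        c = kappa / (4 * np.pi * np.sinh(kappa)) # vMF normalization per kernel
        return (c * np.exp(kappa * dots)).mean(axis=1)

    sites = to_xyz(np.array([35.4, -19.6, 64.0]), np.array([138.7, -169.0, -17.0]))
    grid = to_xyz(np.array([30.0, 60.0]), np.array([140.0, -20.0]))
    print(fisher_kde(grid, sites))
    ```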

  18. Stable laws and cosmic ray physics

    NASA Astrophysics Data System (ADS)

    Genolini, Y.; Salati, P.; Serpico, P. D.; Taillet, R.

    2017-04-01

    Context. In the new "precision era" for cosmic ray astrophysics, scientists making theoretical predictions cannot content themselves with average trends, but need to correctly take into account intrinsic uncertainties. The space-time discreteness of the cosmic ray sources, together with a substantial ignorance of their precise epochs and locations (with the possible exception of the most recent and close ones) play an important role in this sense. Aims: We elaborate a statistical theory to deal with this problem, relating the composite probability P(Ψ) to obtain a flux Ψ at the Earth and the single-source probability p(ψ) to contribute with a flux ψ. The main difficulty arises from the fact that p(ψ) is a "heavy tail" distribution, characterized by power-law or broken power-law behavior up to very large fluxes, for which the central limit theorem does not hold, and leading to distributions different from Gaussian. The functional form of the distribution for the aggregated flux is nonetheless unchanged by its own convolution, that is, it belongs to the so-called stable laws class. Methods: We analytically discuss the regime of validity of the stable laws associated with the distributions arising in cosmic ray astrophysics, as well as the limitations to the treatment imposed by causal considerations and partial source catalog knowledge. We validate our results with extensive Monte Carlo simulations, for different regimes of propagation parameters and energies. Results: We find that relatively simple recipes provide a satisfactory description of the probability P(Ψ). We also find that a naive Gaussian fit to simulation results would underestimate the probability of very large fluxes, that is, several times above the average, while overestimating the probability of relatively milder excursions. At large energies, large flux fluctuations are prevented by causal considerations, while at low energies, a partial knowledge of the recent and nearby population of sources plays an important role. A few proposals have been recently discussed in the literature to account for spectral breaks reported in cosmic ray data in terms of local contributions. We apply our newly developed theory to assess their probabilities, finding that they are relatively small, typically at the 0.1% level or smaller, never exceeding 1%. Conclusions: The use of heavy tail distributions is relevant in assessing how likely a measured cosmic ray flux is to depart from the average expectation in a given model. The existing mathematical theory leading to stable laws can be adapted to the case of interest via some recipes that closely reproduce numerical simulations and are relatively easy to implement.

  19. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
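
    The single-integral form lends itself to a compact implementation: P(slip) = integral of f_required(x) * F_available(x) dx, evaluated here with an explicit trapezoidal rule. The normal distributions below are only a convenient test case, since they admit a closed-form check; the method accepts any combination of distributions.

    ```python
    # Normal distributions are a test case only; any pair of distributions works.
    import numpy as np
    from scipy import stats

    req = stats.norm(0.22, 0.04)      # required friction coefficient
    avail = stats.norm(0.35, 0.06)    # available friction coefficient

    x = np.linspace(0.0, 1.0, 2001)
    y = req.pdf(x) * avail.cdf(x)     # integrand of the single-integral form
    dx = x[1] - x[0]
    p_slip = dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)   # trapezoidal rule

    # closed-form check, valid only when both distributions are normal
    exact = stats.norm.cdf((0.22 - 0.35) / np.hypot(0.04, 0.06))
    print(p_slip, exact)
    ```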

  20. Integrated-Circuit Pseudorandom-Number Generator

    NASA Technical Reports Server (NTRS)

    Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur

    1992-01-01

    Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
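
    A software analogue of the bit-pipeline idea (a sketch, not the circuit itself): an 8-bit value is generated most-significant-bit first, each bit drawn with the conditional probability of a one given the bits already fixed, computed from an arbitrary target distribution. The Gaussian-shaped target below is an illustrative assumption.

    ```python
    # A sketch, not the circuit: bits are drawn MSB-first, each with the
    # conditional probability of a one given the bits already fixed.
    import numpy as np

    def sample_byte(pmf, rng):
        """pmf: length-256 array of probabilities summing to 1."""
        lo, hi = 0, 256                      # values consistent with bits so far
        for _ in range(8):
            mid = (lo + hi) // 2
            total = pmf[lo:hi].sum()
            p_one = pmf[mid:hi].sum() / total if total > 0 else 0.5
            if rng.random() < p_one:         # conditional P(bit = 1 | prefix)
                lo = mid
            else:
                hi = mid
        return lo

    rng = np.random.default_rng(2)
    pmf = np.exp(-0.5 * ((np.arange(256) - 100) / 20.0) ** 2)   # assumed target
    pmf /= pmf.sum()
    print(np.mean([sample_byte(pmf, rng) for _ in range(10_000)]))  # near 100
    ```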

  1. Maximum aposteriori joint source/channel coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Gibson, Jerry D.

    1991-01-01

    A maximum aposteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method attempts to explore a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.

  2. Decameter-wave radio observations of Jupiter during the 1977 apparition

    NASA Technical Reports Server (NTRS)

    Alexander, J. K.; Kaiser, M. L.; Thieman, J. R.; Vaughan, S. S.

    1978-01-01

    A catalog of observations of Jupiter's sporadic decameter-wavelength radio emissions obtained with the Goddard Space Flight Center Jupiter Monitor Network between June 1977 and May 1978 is presented. Data were collected using the Goddard Space Flight Center station in Greenbelt, MD, and at facilities installed at Orroral Valley (Canberra), Australia, and the Nancay Radio Observatory in France. Observations were obtained daily at frequencies of 16.7 and 22.2 MHz using five-element Yagi antennas at each end of a two-element interferometer. Plots of the two-dimensional emission occurrence probability distribution are given.

  3. Zipf exponent of trajectory distribution in the hidden Markov model

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

    This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov chains and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  4. On estimating the phase of periodic waveform in additive Gaussian noise, part 2

    NASA Astrophysics Data System (ADS)

    Rauch, L. L.

    1984-11-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  5. On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1984-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  6. The Pitman-Yor Process and an Empirical Study of Choice Behavior

    NASA Astrophysics Data System (ADS)

    Hisakado, Masato; Sano, Fumiaki; Mori, Shintaro

    2018-02-01

    This study discusses choice behavior using a voting model in which voters can obtain information from a finite number of previous r voters. Voters vote for a candidate with a probability proportional to the previous vote ratio, which is visible to the voters. We obtain the Pitman sampling formula as the equilibrium distribution of r votes. We present the model as a process of posting on a bulletin board system, 2ch.net, where users can choose one of many threads to create a post. We explore how this choice depends on the last r posts and the distribution of these last r posts across threads. We conclude that the posting process is described by our voting model with analog herders for a small r, which might correspond to the time horizon of users' responses.
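
    A toy simulation of the posting process described above: each new post selects a thread with probability proportional to that thread's share of the last r posts. The innovation step (opening a new thread with small probability eps) is an added assumption so that new choices can appear, in the spirit of the Pitman-Yor process; r and eps are illustrative.

    ```python
    # Toy model; r and eps are illustrative, and eps is an added innovation step.
    import random
    from collections import Counter, deque

    def simulate(n_posts=50_000, r=100, eps=0.02, seed=3):
        rng = random.Random(seed)
        window = deque([0] * r, maxlen=r)    # last r posts (thread ids)
        next_id, history = 1, []
        for _ in range(n_posts):
            if rng.random() < eps:
                choice, next_id = next_id, next_id + 1   # open a new thread
            else:
                choice = rng.choice(window)  # ∝ thread's share of last r posts
            window.append(choice)
            history.append(choice)
        return Counter(history)

    print(sorted(simulate().values(), reverse=True)[:10])  # heavy-tailed sizes
    ```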

  7. Quantum Inference on Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Yoder, Theodore; Low, Guang Hao; Chuang, Isaac

    2014-03-01

    Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nm P(e)^(-1)), depending critically on P(e), the probability that the evidence occurs in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n 2^m P(e)^(-1/2)) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
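
    For contrast with the quantum algorithm, the classical baseline is easy to state in code: ancestral sampling with rejection of samples inconsistent with the evidence, whose expected cost grows as 1/P(e). The small rain/sprinkler/wet-grass network below is a standard textbook example, not one from the paper.

    ```python
    # Standard toy network (rain/sprinkler/wet grass), not from the paper.
    import random

    def sample_network(rng):
        rain = rng.random() < 0.2
        sprinkler = rng.random() < (0.01 if rain else 0.4)
        p_wet = (0.99 if (rain and sprinkler) else
                 0.9 if rain else
                 0.85 if sprinkler else 0.05)
        return rain, rng.random() < p_wet

    def rejection_sample(n, rng):
        hits = rains = 0
        while hits < n:                      # evidence e: the grass is wet
            rain, wet = sample_network(rng)
            if wet:
                hits += 1
                rains += rain
        return rains / n

    print("P(rain | wet) ~", rejection_sample(20_000, random.Random(4)))
    ```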

  8. Population Ecology of Nitrifiers in a Stream Receiving Geothermal Inputs of Ammonium

    PubMed Central

    Cooper, A. Bryce

    1983-01-01

    The distribution, activity, and generic diversity of nitrifying bacteria in a stream receiving geothermal inputs of ammonium were studied. The high estimated rates of benthic nitrate flux (33 to 75 mg of N · m−2 · h−1) were a result of the activity of nitrifiers located in the sediment. Nitrifying potentials and ammonium oxidizer most probable numbers in the sediments were at least one order of magnitude higher than those in the waters. Nitrifiers in the oxygenated surface (0 to 2 cm) sediments were limited by suboptimal temperature, pH, and substrate level. Nitrifiers in deep (nonsurface) oxygenated sediments did not contribute significantly to the changes measured in the levels of inorganic nitrogen species in the overlying waters and presumably derived their ammonium supply from ammonification within the sediment. Ammonium-oxidizing isolates obtained by a most-probable number nonenrichment procedure were species of either Nitrosospira or Nitrosomonas, whereas all those obtained by an enrichment procedure (i.e., selective culture) were Nitrosomonas spp. The efficiency of the most-probable-number method for enumerating ammonium oxidizers was calculated to be between 0.05 and 2.0%, suggesting that measurements of nitrifying potentials provide a better estimate of nitrifying populations. PMID:16346261

  9. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility in describing the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
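
    The selection step generalizes readily: fit each candidate distribution by maximum likelihood and rank by AIC = 2k - 2 ln L. The sketch below uses scipy with synthetic diameters standing in for the measured data; the candidate list mirrors the four functions tested.

    ```python
    # Synthetic diameters stand in for the measured data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    diam = rng.gamma(shape=1.2, scale=6.0, size=400) + 5.0   # fake diameters (cm)

    for dist in (stats.lognorm, stats.gamma, stats.weibull_min, stats.burr12):
        params = dist.fit(diam)
        loglik = dist.logpdf(diam, *params).sum()
        aic = 2 * len(params) - 2 * loglik                   # AIC = 2k - 2 ln L
        print(f"{dist.name:>12}: AIC = {aic:.1f}")
    ```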

  10. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wavelength but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield sufficient sample size for reliable statistics. Throughout all experiments, an effort was made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at the distance of about 100 m. Acknowledgement This study is carried out in the framework of the EC supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for rectangular initial spectral shape, the carrier wave period T0=1.5 s.

  11. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  12. Main Geomagnetic Field Models from Oersted and Magsat Data Via a Rigorous General Inverse Theory with Error Bounds

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1999-01-01

    The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.

  13. Fast-NPS-A Markov Chain Monte Carlo-based analysis tool to obtain structural information from single-molecule FRET measurements

    NASA Astrophysics Data System (ADS)

    Eilert, Tobias; Beckers, Maximilian; Drechsler, Florian; Michaelis, Jens

    2017-10-01

    The analysis tool and software package Fast-NPS can be used to analyse smFRET data to obtain quantitative structural information about macromolecules in their natural environment. In the algorithm a Bayesian model gives rise to a multivariate probability distribution describing the uncertainty of the structure determination. Since Fast-NPS aims to be an easy-to-use general-purpose analysis tool for a large variety of smFRET networks, we established an MCMC based sampling engine that approximates the target distribution and requires no parameter specification by the user at all. For an efficient local exploration we automatically adapt the multivariate proposal kernel according to the shape of the target distribution. In order to handle multimodality, the sampler is equipped with a parallel tempering scheme that is fully adaptive with respect to temperature spacing and number of chains. Since the molecular surrounding of a dye molecule affects its spatial mobility and thus the smFRET efficiency, we introduce dye models which can be selected for every dye molecule individually. These models allow the user to represent the smFRET network in great detail leading to an increased localisation precision. Finally, a tool to validate the chosen model combination is provided. Programme Files doi:http://dx.doi.org/10.17632/7ztzj63r68.1 Licencing provisions: Apache-2.0 Programming language: GUI in MATLAB (The MathWorks) and the core sampling engine in C++ Nature of problem: Sampling of highly diverse multivariate probability distributions in order to solve for macromolecular structures from smFRET data. Solution method: MCMC algorithm with fully adaptive proposal kernel and parallel tempering scheme.

  14. Occupancy models for monitoring marine fish: a bayesian hierarchical approach to model imperfect detection with a novel gear combination.

    PubMed

    Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics.

  15. Occupancy Models for Monitoring Marine Fish: A Bayesian Hierarchical Approach to Model Imperfect Detection with a Novel Gear Combination

    PubMed Central

    Coggins, Lewis G.; Bacheler, Nathan M.; Gwinn, Daniel C.

    2014-01-01

    Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics lending credence to previous characterizations of red snapper as a reef habitat generalist. This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics. PMID:25255325

  16. A Statistical Treatment of Bioassay Pour Fractions

    NASA Technical Reports Server (NTRS)

    Barengoltz, Jack; Hughes, David W.

    2014-01-01

    The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence ( P(
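
    A sketch of the inversion described, under the stated binomial model: with k spores counted in a pour fraction f, an upper estimate of the total number N is the smallest N for which observing k or fewer counts becomes sufficiently improbable at the chosen confidence level. The numbers below are illustrative.

    ```python
    # Illustrative numbers; the smallest N with P(X <= k | N, f) below
    # 1 - confidence serves as an upper estimate of the total count.
    from scipy.stats import binom

    def upper_total(k, f, confidence=0.95):
        n = max(k, 1)
        while binom.cdf(k, n, f) > 1.0 - confidence:
            n += 1
        return n

    print(upper_total(k=12, f=0.5))   # 12 spores counted in half the sample
    ```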

  17. On the objective identification of flood seasons

    NASA Astrophysics Data System (ADS)

    Cunderlik, Juraj M.; Ouarda, Taha B. M. J.; BobéE, Bernard

    2004-01-01

    The determination of seasons of high and low probability of flood occurrence is a task with many practical applications in contemporary hydrology and water resources management. Flood seasons are generally identified subjectively by visually assessing the temporal distribution of flood occurrences and, then at a regional scale, verified by comparing the temporal distribution with distributions obtained at hydrologically similar neighboring sites. This approach is subjective, time consuming, and potentially unreliable. The main objective of this study is therefore to introduce a new, objective, and systematic method for the identification of flood seasons. The proposed method tests the significance of flood seasons by comparing the observed variability of flood occurrences with the theoretical flood variability in a nonseasonal model. The method also addresses the uncertainty resulting from sampling variability by quantifying the probability associated with the identified flood seasons. The performance of the method was tested on an extensive number of samples with different record lengths generated from several theoretical models of flood seasonality. The proposed approach was then applied on real data from a large set of sites with different flood regimes across Great Britain. The results show that the method can efficiently identify flood seasons from both theoretical and observed distributions of flood occurrence. The results were used for the determination of the main flood seasonality types in Great Britain.

  18. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities and marginal probabilities, as well as joint probabilities for rectangular regions, are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed-circuit television experiment are included.
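
    The rectangular-region probability reduces to inclusion-exclusion on the joint CDF, which modern libraries expose directly; a sketch with example means, variances, and correlation follows. This reproduces the quantity the report's programs compute, not their original implementation.

    ```python
    # Example parameters; the identity holds for any bivariate normal.
    from scipy.stats import multivariate_normal

    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.6], [0.6, 1.0]])

    def rect_prob(a1, b1, a2, b2):
        """P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the CDF."""
        return (mvn.cdf([b1, b2]) - mvn.cdf([a1, b2])
                - mvn.cdf([b1, a2]) + mvn.cdf([a1, a2]))

    print(rect_prob(-1.0, 1.0, -0.5, 2.0))
    ```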

  19. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.

  20. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.

  2. Positive phase space distributions and uncertainty relations

    NASA Technical Reports Server (NTRS)

    Kruger, Jan

    1993-01-01

    In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.

  3. Soccer Matches as Experiments - How Often Does the 'Best' Team Win?

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald K.; Freeman, G. H.

    2009-01-01

    Models in which the number of goals scored by a team in a soccer match follows a Poisson distribution, or a closely related one, have been widely discussed. We here consider a soccer match as an experiment to assess which of two teams is superior and examine the probability that the outcome of the experiment (match) truly represents the relative abilities of the two teams. Given a final score it is possible by using a Bayesian approach to quantify the probability that it was or was not the case that the best team won. For typical scores, the probability of a misleading result is significant. Modifying the rules of the game to increase the typical number of goals scored would improve the situation, but a level of confidence that would normally be regarded as satisfactory could not be obtained unless the character of the game were radically changed.
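
    A hedged sketch of this kind of calculation: with Poisson scoring and a Gamma(alpha, beta) prior on each team's rate, the posterior after a single match score (a, b) is Gamma(alpha + a, beta + 1) per team, and the probability that "the best team won" is the posterior probability that the winner's rate is larger. The prior values below are assumptions, not those of the paper.

    ```python
    # Prior parameters are assumptions; the posterior is the standard
    # Gamma-Poisson conjugate update for a single observation.
    import numpy as np

    def p_best_team_won(a, b, alpha=2.0, beta=1.5, n=1_000_000, seed=6):
        rng = np.random.default_rng(seed)
        lam_a = rng.gamma(alpha + a, 1.0 / (beta + 1.0), n)  # numpy takes scale
        lam_b = rng.gamma(alpha + b, 1.0 / (beta + 1.0), n)
        return np.mean(lam_a > lam_b)

    print(p_best_team_won(2, 1))   # e.g. a 2-1 final score
    ```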

  4. Are Soccer Matches Badly Designed Experiments?

    NASA Technical Reports Server (NTRS)

    Skinner, G. K.; Freeman, G. H.

    2008-01-01

    Models in which the number of goals scored by a team in a soccer match follows a Poisson distribution, or a closely related one, have been widely discussed. We here consider a soccer match as an experiment to assess which of two teams is superior and examine the probability that the outcome of the experiment (match) truly represents the relative abilities of the two teams. Given a final score it is possible by using a Bayesian approach to quantify the probability that it was or was not the case that 'the best team won'. For typical scores, the probability of a misleading result is significant. Modifying the rules of the game to increase the typical number of goals scored would improve the situation, but a level of confidence that would normally be regarded as satisfactory could not be obtained unless the character of the game were radically changed.

  5. Generalized Arcsine Laws for Fractional Brownian Motion

    NASA Astrophysics Data System (ADS)

    Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg

    2018-01-01

    The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian Bt starting from the origin, and evolving during time T , one considers the following three observables: (i) the duration t+ the process is positive, (ii) the time tlast the process last visits the origin, and (iii) the time tmax when it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution expressed as an arcsine function, thus the name arcsine laws. We show how these laws change for fractional Brownian motion Xt, a non-Markovian Gaussian process indexed by the Hurst exponent H . It generalizes standard Brownian motion (i.e., H =1/2 ). We obtain the three probabilities using a perturbative expansion in ɛ =H -1/2 . While all three probabilities are different, this distinction can only be made at second order in ɛ . Our results are confirmed to high precision by extensive numerical simulations.
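
    The H = 1/2 baseline that the paper perturbs around is easy to check numerically: for a long discrete random walk, the fraction of time spent positive should approximately follow the arcsine law P(t+/T <= x) = (2/pi) arcsin(sqrt(x)). A sketch:

    ```python
    # Discrete-walk approximation to Brownian motion; agreement is approximate.
    import numpy as np

    rng = np.random.default_rng(7)
    trials, T = 1_000, 5_000
    paths = rng.choice([-1.0, 1.0], size=(trials, T)).cumsum(axis=1)
    frac_positive = (paths > 0).mean(axis=1)       # t+/T per realization

    for x in (0.1, 0.5, 0.9):
        print(x, (frac_positive <= x).mean(), (2 / np.pi) * np.arcsin(np.sqrt(x)))
    ```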

  6. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674

  7. Multinomial Logistic Regression Predicted Probability Map To Visualize The Influence Of Socio-Economic Factors On Breast Cancer Occurrence in Southern Karnataka

    NASA Astrophysics Data System (ADS)

    Madhu, B.; Ashok, N. C.; Balasubramanian, S.

    2014-11-01

    Multinomial logistic regression analysis was used to develop a statistical model that can predict the probability of breast cancer in Southern Karnataka using breast cancer occurrence data from 2007 to 2011. Independent socio-economic variables describing breast cancer occurrence, such as age, education, occupation, parity, type of family, health insurance coverage, residential locality, and socioeconomic status, were obtained for each case. The models were developed as follows: i) spatial visualization of the urban-rural distribution of breast cancer cases obtained from the Bharat Hospital and Institute of Oncology; ii) socio-economic risk factors describing the breast cancer occurrences were compiled for each case, these data were then analyzed using multinomial logistic regression analysis in SPSS statistical software, relations between the occurrence of breast cancer across socio-economic status and the influence of other socio-economic variables were evaluated, and multinomial logistic regression models were constructed; iii) the model that best predicted the occurrence of breast cancer was identified. This multivariate logistic regression model was entered into a geographic information system, and maps showing the predicted probability of breast cancer occurrence in Southern Karnataka were created. This study demonstrates that multinomial logistic regression is a valuable tool for developing models that predict the probability of breast cancer occurrence in Southern Karnataka.

  8. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, Brent M.; Karlinger, Michael R.

    2003-01-01

    The T‐year annual maximum flood at a site is defined to be that streamflow, that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T‐year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank transformed data and therefore that knowledge of the at‐site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100‐year flood will occur on the average every 4.5 years.
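
    A minimal Monte Carlo sketch of an RFP in the spirit of the method: simulate annual maxima at k sites as correlated Gaussians and count years in which at least one site exceeds its own T-year quantile. Because only the rank (correlation) structure matters, the marginal flood distribution never enters; the exchangeable correlation rho below is an assumption for illustration.

    ```python
    # Exchangeable correlation rho is an assumption; the equicorrelated Gaussian
    # is simulated cheaply through a one-factor decomposition.
    import numpy as np
    from scipy.stats import norm

    def rfp(k=193, T=100, rho=0.3, years=50_000, seed=8):
        rng = np.random.default_rng(seed)
        common = rng.standard_normal((years, 1))
        idio = rng.standard_normal((years, k))
        z = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
        thresh = norm.ppf(1 - 1 / T)               # per-site T-year level
        return (z > thresh).any(axis=1).mean()

    print("P(at least one 100-year flood in a given year):", rfp())
    ```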

  9. A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers

    PubMed Central

    Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin

    2018-01-01

    In multi-target tracking, the outliers-corrupted process and measurement noises can reduce the performance of the probability hypothesis density (PHD) filter severely. To solve the problem, this paper proposed a novel PHD filter, called Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as a Student’s t distribution as well as approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. Then, a closed PHD recursion is obtained based on Student’s t approximation. Our approach can make full use of the heavy-tailed characteristic of a Student’s t distribution to handle the situations with heavy-tailed process and the measurement noises. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain a good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348

  10. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general, the decay is algebraic, S_N ~ N^(-sigma_m), for large N. We analytically obtain the exponent sigma_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents sigma_m grow with m, and we show that sigma_m ~ m for large m.
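
    The ordering probability S_N is simple to estimate directly, which makes the universality claim easy to probe: draw m sequences of N variables, form running maxima, and check the ordering at every step. For m = 2 the decay should follow N^(-1/2) up to a constant:

    ```python
    # Direct simulation of the ordering probability S_N.
    import numpy as np

    def ordered_prob(m=2, N=100, trials=50_000, seed=9):
        rng = np.random.default_rng(seed)
        maxima = np.maximum.accumulate(rng.random((trials, m, N)), axis=2)
        ordered = np.all(maxima[:, :-1, :] > maxima[:, 1:, :], axis=(1, 2))
        return ordered.mean()

    print(ordered_prob(), 100 ** -0.5)
    ```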

  11. Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Errington, Jeffrey R.

    2003-06-01

    An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.

  12. Statistical analysis of PM₁₀ concentrations at different locations in Malaysia.

    PubMed

    Sansuddin, Nurulilyana; Ramli, Nor Azam; Yahaya, Ahmad Shukri; Yusof, Noor Faizah Fitri Md; Ghazali, Nurul Adyani; Madhoun, Wesam Ahmed Al

    2011-09-01

    Malaysia has experienced several haze events since the 1980s as a consequence of the transboundary movement of air pollutants emitted from forest fires and open burning activities. Hazy episodes can result from local activities and be categorized as "localized haze". General probability distributions (i.e., gamma and log-normal) were chosen to analyze the PM(10) concentrations data at two different types of locations in Malaysia: industrial (Johor Bahru and Nilai) and residential (Kota Kinabalu and Kuantan). These areas were chosen based on their frequently high PM(10) concentration readings. The best models representing the areas were chosen based on their performance indicator values. The best distributions provided the probability of exceedances and the return period between the actual and predicted concentrations based on the threshold limit given by the Malaysian Ambient Air Quality Guidelines (24-h average of 150 μg/m(3)) for PM(10) concentrations. The short-term prediction for PM(10) exceedances in 14 days was obtained using the autoregressive model.
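
    A sketch of the exceedance computation for one site: fit a log-normal distribution to the concentration data and convert it into a probability of exceeding the 150 μg/m3 guideline and the corresponding return period. The synthetic daily averages below stand in for monitoring records.

    ```python
    # Synthetic daily averages stand in for monitoring records.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    pm10 = rng.lognormal(mean=3.8, sigma=0.5, size=365)

    shape, loc, scale = stats.lognorm.fit(pm10, floc=0)
    p_exceed = stats.lognorm.sf(150, shape, loc, scale)   # P(PM10 > 150)
    print(f"P = {p_exceed:.4f}, return period = {1 / p_exceed:.1f} days")
    ```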

  13. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
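
    The link between the reciprocal distribution and Benford's law can be illustrated numerically: sampling x with density proportional to 1/x on [1, 10) (via the inverse CDF, x = 10^u with u uniform) reproduces the first-digit frequencies log10(1 + 1/d).

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    x = 10.0 ** rng.random(1_000_000)          # density ∝ 1/x on [1, 10)
    digits = x.astype(int)                     # leading digit, 1..9
    for d in range(1, 10):
        print(d, round((digits == d).mean(), 4), round(np.log10(1 + 1 / d), 4))
    ```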

  14. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations that are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solving transport equations for the quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.

  15. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process and service times follow an exponential distribution. Whenever the system becomes empty, all the servers go on vacation together and return at the end of the vacation. The servers then start providing service if there are waiting customers; otherwise they wait for the busy period to begin. The vacation times are assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the server are obtained explicitly. Also, numerical illustrations are added to visualize the effect of various parameters.

  16. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating large-deviation rate function in the thermodynamic limit, a generalization of free energy to a nonequilibrium setting, we obtain a complete distribution of current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors in either the thermodynamic limit, at which the current probability distribution becomes discontinuous, or at maximal driving, when the range of possible current values changes discontinuously. In the thermodynamic limit the current has a finite upper and lower bound. We also explicitly confirm nonequilibrium fluctuation relation and show that the current distribution is the same under mapping of the coupling strength Γ→1/Γ.

  17. Spatial Probability Distribution of Strata's Lithofacies and its Impacts on Land Subsidence in Huairou Emergency Water Resources Region of Beijing

    NASA Astrophysics Data System (ADS)

    Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.

    2016-12-01

    Continuous over-exploitation of groundwater causes dramatic drawdown and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the upper-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of strata's lithofacies in the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the thickness of the clay layer extracted from the simulation with the deformation field acquired from PS-InSAR technology, the influence of strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories, of which clay was the predominant compressible material, and their probability distribution in the observation space was mined using transition probability geostatistics. Geologically plausible realizations of lithofacies distribution were produced, accounting for complex heterogeneity in the alluvial plain. At a particular probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of strata's lithofacies. This level, equaling nearly the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths, and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Some similarities of pattern were indicated between the spatial distribution of the deformation field and that of the clay layer. In areas with roughly similar water table decline, more subsidence occurs at locations where the subsurface has a higher probability of containing compressible material than at locations with a lower probability. Such an estimate of spatial probability distribution is useful for analyzing the uncertainty of land subsidence.

  18. The exact probability distribution of the rank product statistics for replicated experiments.

    PubMed

    Eisinga, Rob; Breitling, Rainer; Heskes, Tom

    2013-03-18

    The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
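
    As a minimal sketch of the statistic itself and of the permutation approximation that the paper's exact derivation replaces (array shapes and names are illustrative, not the authors' code):

    ```python
    import numpy as np

    def rank_product(data):
        """Rank product per gene: geometric mean of within-replicate ranks.

        data: (n_genes, k_replicates) array of expression values;
        rank 1 = most strongly up-regulated gene in that replicate.
        """
        # double argsort yields ascending ranks 0..n-1; flip so 1 = largest
        ranks = data.shape[0] - np.argsort(np.argsort(data, axis=0), axis=0)
        return ranks.prod(axis=1) ** (1.0 / data.shape[1])

    def permutation_pvalues(data, n_perm=200, rng=None):
        """Permutation approximation to P(RP <= observed); the exact
        distribution derived in the paper replaces this sampling step."""
        rng = np.random.default_rng(rng)
        observed = rank_product(data)
        exceed = np.zeros_like(observed)
        for _ in range(n_perm):
            shuffled = rng.permuted(data, axis=0)  # permute genes per replicate
            null_rp = rank_product(shuffled)
            exceed += (null_rp[:, None] <= observed[None, :]).mean(axis=0)
        return exceed / n_perm

    data = np.random.default_rng(0).normal(size=(200, 3))
    print(permutation_pvalues(data)[:5])
    ```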

  19. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  20. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  1. Possible climate change over Eurasia under different emission scenarios

    NASA Astrophysics Data System (ADS)

    Sokolov, A. P.; Monier, E.; Scott, J. R.; Forest, C. E.; Schlosser, C. A.

    2011-12-01

    In an attempt to evaluate possible climate change over EURASIA, we analyze results of six AMIP-type simulations with CAM version 3 (CAM3) at 2x2.5 degree resolution. CAM3 is driven by time series of sea surface temperatures (SSTs) and sea ice obtained by running the MIT IGSM2.3, which consists of a 3D ocean GCM coupled to a zonally-averaged atmospheric climate-chemistry model. In addition to changes in SSTs, CAM3 is forced by changes in greenhouse gases and ozone concentrations, sulfate aerosol forcing and black carbon loading calculated by the IGSM2.3. An essential feature of the IGSM is the possibility to vary its climate sensitivity (using a cloud adjustment technique) and the strength of the aerosol forcing. For consistency, new modules were developed in CAM3 to modify its climate sensitivity and aerosol forcing to match those used in the simulations with the IGSM2.3. The simulations presented in this paper were carried out for two emission scenarios, a "Business as usual" scenario and a 660 ppm of CO2-EQ stabilization, which are similar to the RCP8.5 and RCP4.5 scenarios, respectively. Values of climate sensitivity used in the simulations within the IGSM-CAM framework are the median and the bounds of the 90% probability interval of the probability distribution obtained by comparing the 20th century climate simulated by different versions of the IGSM with observations. The associated strength of the aerosol forcing was chosen to ensure a good agreement with the observed climate change over the 20th century. Because the concentration of sulfate aerosol significantly decreases over the 21st century in both emissions scenarios, climate changes obtained in these simulations provide a good approximation for the median, and the 5th and 95th percentiles of the probability distribution of 21st century climate change.

  2. Possible climate change over Eurasia under different emission scenarios

    NASA Astrophysics Data System (ADS)

    Sokolov, A. P.; Monier, E.; Gao, X.

    2012-12-01

    In an attempt to evaluate possible climate change over EURASIA, we analyze results of six AMIP-type simulations with CAM version 3 (CAM3) at 2x2.5 degree resolution. CAM3 is driven by time series of sea surface temperatures (SSTs) and sea ice obtained by running the MIT IGSM2.3, which consists of a 3D ocean GCM coupled to a zonally-averaged atmospheric climate-chemistry model. In addition to changes in SSTs, CAM3 is forced by changes in greenhouse gases and ozone concentrations, sulfate aerosol forcing and black carbon loading calculated by the IGSM2.3. An essential feature of the IGSM is the possibility to vary its climate sensitivity (using a cloud adjustment technique) and the strength of the aerosol forcing. For consistency, new modules were developed in CAM3 to modify its climate sensitivity and aerosol forcing to match those used in the simulations with the IGSM2.3. The simulations presented in this paper were carried out for two emission scenarios, a "Business as usual" scenario and a 660 ppm of CO2-EQ stabilization, which are similar to the RCP8.5 and RCP4.5 scenarios, respectively. Values of climate sensitivity used in the simulations within the IGSM-CAM framework are the median and the bounds of the 90% probability interval of the probability distribution obtained by comparing the 20th century climate simulated by different versions of the IGSM with observations. The associated strength of the aerosol forcing was chosen to ensure a good agreement with the observed climate change over the 20th century. Because the concentration of sulfate aerosol significantly decreases over the 21st century in both emissions scenarios, climate changes obtained in these simulations provide a good approximation for the median, and the 5th and 95th percentiles of the probability distribution of 21st century climate change.

  3. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, we study the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal distributions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for the input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here by uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output is therefore much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
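
    A minimal sketch of the nested (second-order) Monte Carlo structure described here, with a hypothetical stand-in for the radioecological model and assumed hyperparameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def biosphere_factor(transfer_param):
        # Hypothetical stand-in: the real biosphere dose conversion
        # factor model is far more elaborate than a linear map.
        return 3.0e-9 * transfer_param

    n_outer, n_inner = 200, 2000          # epistemic x aleatory samples
    p95 = []
    for _ in range(n_outer):
        # outer loop: epistemic uncertainty on the lognormal moments
        mu = rng.normal(loc=0.0, scale=0.3)       # uncertain log-mean
        sigma = rng.uniform(0.4, 0.8)             # uncertain log-sd
        # inner loop: aleatory variability of the parameter itself
        param = rng.lognormal(mean=mu, sigma=sigma, size=n_inner)
        p95.append(np.percentile(biosphere_factor(param), 95))

    # spread of the aleatory 95th percentile across epistemic samples:
    print(np.percentile(p95, [5, 50, 95]))
    ```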

  4. Wave-height hazard analysis in Eastern Coast of Spain - Bayesian approach using generalized Pareto distribution

    NASA Astrophysics Data System (ADS)

    Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.

    2005-03-01

    Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The occurrence of events in time is assumed to be Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of derived parameters such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
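
    A minimal sketch of the Poisson-GPD machinery with a grid posterior, assuming synthetic excess data and flat-ish priors rather than the informative priors used in the paper:

    ```python
    import numpy as np
    from scipy import stats

    # excesses over an empirically chosen threshold (illustrative data)
    rng = np.random.default_rng(2)
    excesses = stats.genpareto.rvs(c=0.1, scale=1.5, size=80, random_state=rng)
    years, rate_prior_mean = 30.0, 4.0    # record length, events per year

    # grid posterior over GPD shape xi and scale beta (flat priors here)
    xi = np.linspace(-0.3, 0.6, 91)
    beta = np.linspace(0.5, 3.5, 121)
    XI, BETA = np.meshgrid(xi, beta, indexing="ij")
    loglik = np.array([[stats.genpareto.logpdf(excesses, c=x, scale=b).sum()
                        for b in beta] for x in xi])
    post = np.exp(loglik - loglik.max())
    post /= post.sum()

    # posterior of a derived quantity, e.g. the 50-year return level of the
    # excess (add the threshold back for absolute wave heights), combining a
    # Gamma posterior for the Poisson rate lambda with the GPD grid posterior
    lam = stats.gamma.rvs(a=rate_prior_mean * years, scale=1.0 / years,
                          size=post.size, random_state=rng)
    idx = rng.choice(post.size, p=post.ravel(), size=post.size)
    x50 = stats.genpareto.ppf(1.0 - 1.0 / (50.0 * lam),
                              c=XI.ravel()[idx], scale=BETA.ravel()[idx])
    print(np.percentile(x50, [5, 50, 95]))   # uncertainty of the return level
    ```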

  5. Characterizing ISI and sub-threshold membrane potential distributions: Ensemble of IF neurons with random squared-noise intensity.

    PubMed

    Kumar, Sanjeev; Karmeshu

    2018-04-01

    A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of the probability distributions in terms of the generalized K-distribution. The findings of the proposed model are validated on a record of the spiking activity of thousands of neurons. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions

    PubMed Central

    König, Sandra; Schauer, Stefan

    2016-01-01

    Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572

  7. Determination of mean rainfall from the Special Sensor Microwave/Imager (SSM/I) using a mixed lognormal distribution

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Chase, Robert

    1992-01-01

    Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution for the positive rainfall values with a spike at zero for observations of no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution by using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results as well as quantitative comparisons with corresponding in situ raingage data are performed.
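
    A minimal sketch of the mixed-distribution mean estimate (point mass at zero plus maximum-likelihood lognormal for the positive values), with illustrative data:

    ```python
    import numpy as np

    def mixed_lognormal_mean(rain):
        """Mean rain rate under a mixed distribution: a point mass at zero
        plus a lognormal fitted to the positive values by maximum likelihood."""
        rain = np.asarray(rain, dtype=float)
        wet = rain[rain > 0]
        p_wet = wet.size / rain.size          # probability of nonzero rain
        mu = np.log(wet).mean()               # MLE of lognormal parameters
        sigma2 = np.log(wet).var()
        # E[R] = P(R > 0) * E[lognormal] = p * exp(mu + sigma^2 / 2)
        return p_wet * np.exp(mu + 0.5 * sigma2)

    obs = np.r_[np.zeros(60), np.random.default_rng(3).lognormal(0.5, 1.0, 40)]
    print(mixed_lognormal_mean(obs))
    ```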

  8. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DallAnese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.

  9. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
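
    A minimal sketch of one of the compared constructions, stabilized weights under a homoscedastic normal exposure model (variable names and the simulated data are illustrative):

    ```python
    import numpy as np
    from scipy import stats

    def stabilized_ipw_normal(exposure, design):
        """Stabilized inverse probability weights for a continuous exposure,
        assuming a homoscedastic normal exposure model:
          numerator   = marginal density N(mean(X), sd(X))
          denominator = conditional density N(Z @ b, sd of residuals)."""
        coef, *_ = np.linalg.lstsq(design, exposure, rcond=None)
        fitted = design @ coef
        resid_sd = np.std(exposure - fitted, ddof=design.shape[1])
        num = stats.norm.pdf(exposure, exposure.mean(), exposure.std(ddof=1))
        den = stats.norm.pdf(exposure, fitted, resid_sd)
        return num / den

    rng = np.random.default_rng(4)
    Z = rng.normal(size=(500, 2))                      # confounders
    X = 1.0 + Z @ np.array([0.5, -0.3]) + rng.normal(size=500)
    w = stabilized_ipw_normal(X, np.c_[np.ones(500), Z])
    print(w.mean(), w.min(), w.max())                  # mean should be near 1
    ```

    The quantile-binning alternative found to perform well in the paper replaces the two densities with bin-membership probabilities after discretizing the exposure into 10-20 categories.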

  10. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
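
    A minimal sketch of the Monte Carlo propagation step, using a deliberately simplified two-term observation model that is an assumption for illustration, not the paper's full Law-of-Total-Probability formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simplified observation model (an assumption for illustration only):
    #   measured signal = sensitivity * true concentration + false-positive
    #   signal + qPCR replicate measurement error (on the log10 scale).
    c_obs = 4.2           # log10 marker copies measured in the sample
    n = 100_000
    sens = rng.beta(40, 4, n)          # sensitivity, from reference samples
    fp = rng.lognormal(2.0, 1.0, n)    # false-positive signal (linear scale)
    err = rng.normal(0.0, 0.1, n)      # replicate precision error (log10)

    c_true = (10 ** (c_obs + err) - fp) / sens
    c_true = np.log10(np.clip(c_true, 1e-12, None))
    # distribution of the corrected ("true") concentration:
    print(np.percentile(c_true, [2.5, 50, 97.5]))
    ```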

  11. An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood

    NASA Astrophysics Data System (ADS)

    Dinh, Khanh N.; Sidje, Roger B.

    2017-12-01

    Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.
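
    A minimal sketch of the plain FSP step that the Krylov-FSP-SSA variant accelerates, for an assumed birth-death gene expression model:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Finite state projection for a birth-death model (production rate k,
    # first-order degradation rate g), truncated at copy number N.
    k, g, N = 10.0, 1.0, 60

    A = np.zeros((N + 1, N + 1))       # CME generator, dp/dt = A p
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k           # birth: n -> n+1
        A[n, n] -= k                   # outflow; mass leaving at n = N
                                       # bounds the FSP truncation error
        if n > 0:
            A[n - 1, n] += g * n       # death: n -> n-1
            A[n, n] -= g * n

    p0 = np.zeros(N + 1); p0[0] = 1.0
    p_t = expm(A * 2.0) @ p0           # transient distribution at t = 2
    print(p_t.sum())                   # = 1 minus the FSP error bound
    print(p_t.argmax())                # mode, near k/g at steady state
    ```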

  12. Evaluation of carotid plaque echogenicity based on the integral of the cumulative probability distribution using gray-scale ultrasound images.

    PubMed

    Huang, Xiaowei; Zhang, Yanling; Meng, Long; Abbott, Derek; Qian, Ming; Wong, Kelvin K L; Zheng, Rongqing; Zheng, Hairong; Niu, Lili

    2017-01-01

    Carotid plaque echogenicity is associated with the risk of cardiovascular events. Gray-scale median (GSM) of the ultrasound image of carotid plaques has been widely used as an objective method for evaluation of plaque echogenicity in patients with atherosclerosis. We proposed a computer-aided method to evaluate plaque echogenicity and compared its efficiency with GSM. One hundred and twenty-five carotid plaques (43 echo-rich, 35 intermediate, 47 echolucent) were collected from 72 patients in this study. The cumulative probability distribution curves were obtained based on statistics of the pixels in the gray-level images of plaques. The area under the cumulative probability distribution curve (AUCPDC) was calculated as its integral value to evaluate plaque echogenicity. The classification accuracy for three types of plaques is 78.4% (kappa value, κ = 0.673), when the AUCPDC is used for classifier training, whereas GSM is 64.8% (κ = 0.460). The receiver operating characteristic curves were produced to test the effectiveness of AUCPDC and GSM for the identification of echolucent plaques. The area under the curve (AUC) was 0.817 when AUCPDC was used for training the classifier, which is higher than that achieved using GSM (AUC = 0.746). Compared with GSM, the AUCPDC showed a borderline association with coronary heart disease (Spearman r = 0.234, p = 0.050). Our experimental results suggest that AUCPDC analysis is a promising method for evaluation of plaque echogenicity and predicting cardiovascular events in patients with plaques.
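
    A minimal sketch of the AUCPDC computation on gray-level pixel data, with synthetic patches standing in for segmented plaques:

    ```python
    import numpy as np

    def aucpdc(plaque_pixels, n_levels=256):
        """Area under the cumulative probability distribution of gray levels
        inside the segmented plaque. Levels are normalized to [0, 1], so
        echolucent (dark) plaques, whose mass sits at low levels, give a
        larger area than echo-rich (bright) ones."""
        hist = np.bincount(plaque_pixels.ravel(), minlength=n_levels)
        cdf = np.cumsum(hist) / hist.sum()
        return cdf.mean()   # ~ integral of the CDF over normalized levels

    def gsm(plaque_pixels):
        """Gray-scale median, the conventional echogenicity index."""
        return np.median(plaque_pixels)

    rng = np.random.default_rng(6)
    echolucent = rng.integers(0, 80, size=(40, 40))    # darker patch
    echo_rich = rng.integers(100, 230, size=(40, 40))  # brighter patch
    print(aucpdc(echolucent), aucpdc(echo_rich))       # larger vs smaller
    print(gsm(echolucent), gsm(echo_rich))
    ```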

  13. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
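
    For contrast with the constraint-free dependence formulation, a minimal sketch of the standard doubly constrained entropy-maximizing trip model solved by iterative proportional fitting (cost matrix and totals are illustrative):

    ```python
    import numpy as np

    def maxent_trip_distribution(origins, destinations, cost, beta, n_iter=100):
        """Doubly constrained entropy-maximizing model
        T_ij = a_i * b_j * O_i * D_j * exp(-beta * c_ij), with the balancing
        factors a, b found by iterative proportional fitting."""
        f = np.exp(-beta * cost)
        a = np.ones(len(origins))
        for _ in range(n_iter):
            b = 1.0 / (f.T @ (a * origins))          # satisfy column totals
            a = 1.0 / (f @ (b * destinations))       # satisfy row totals
        return (a * origins)[:, None] * (b * destinations)[None, :] * f

    O = np.array([400.0, 300.0, 300.0])       # trips produced at origins
    D = np.array([500.0, 250.0, 250.0])       # trips attracted to destinations
    c = np.array([[1.0, 3.0, 4.0],
                  [3.0, 1.0, 2.0],
                  [4.0, 2.0, 1.0]])           # travel cost (illustrative)
    T = maxent_trip_distribution(O, D, c, beta=0.7)
    print(T.round(1))
    print(T.sum(axis=1), T.sum(axis=0))       # reproduce O and D
    ```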

  14. In-situ observations of a bi-modal ion distribution in the outer coma of comet P/Halley

    NASA Technical Reports Server (NTRS)

    Thomsen, M. F.; Feldman, W. C.; Wilken, B.; Jockers, K.; Stuedemann, W.

    1987-01-01

    Observations obtained by the Johnstone Plasma Analyzer on the Giotto fly-by of comet Halley showed a fairly sudden decrease in the count rate of energetic (about 30 keV) water-group ions inside about 500,000 km from the nucleus. This decrease was accompanied by the appearance of a new water-group ion population at slightly lower energies (less than 10 keV). Close inspection reveals that this lower-energy peak was also present somewhat earlier in the postshock flow but only became prominent near the sudden transition just described. It is shown that the observed bimodal ion distribution is well explained in terms of the velocity history of the accreting solar wind flow in the outer coma. The decline in count rate of the energetic pick-up distribution is due to a relatively sudden slowing of the bulk flow there and not to a loss of particles. Hence, charge-exchange cooling of the flow is probably not important at these distances from the nucleus. The observations suggest that pitch-angle scattering is fairly efficient at least after the bow shock, but that energy diffusion is probably not very efficient.

  15. Application of Probabilistic Methods for the Determination of an Economically Robust HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.

    1996-01-01

    This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step by step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.

  16. Investigation of Sunspot Area Varying with Sunspot Number

    NASA Astrophysics Data System (ADS)

    Li, K. J.; Li, F. Y.; Zhang, J.; Feng, W.

    2016-11-01

    The statistical relationship between sunspot area (SA) and sunspot number (SN) is investigated through analysis of their daily observation records from May 1874 to April 2015. For a total of 1607 days, representing 3 % of the total interval considered, either SA or SN had a value of zero while the other parameter did not. These occurrences most likely reflect the report of short-lived spots by a single observatory and subsequent averaging of zero values over multiple stations. The main results obtained are as follows: i) The number of spotless days around the minimum of a solar cycle is statistically negatively correlated with the maximum strength of solar activity of that cycle. ii) The probability distribution of SA generally decreases monotonically with SA, but the distribution of SN generally increases first, then it decreases as a whole. The different probability distribution of SA and SN should strengthen their non-linear relation, and the correction factor [k] in the definition of SN may be one of the factors that cause the non-linearity. iii) The non-linear relation of SA and SN indeed exists statistically, and it is clearer during the maximum epoch of a solar cycle.

  17. An exactly solvable coarse-grained model for species diversity

    NASA Astrophysics Data System (ADS)

    Suweis, Samir; Rinaldo, Andrea; Maritan, Amos

    2012-07-01

    We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.

  18. Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.

    1971-01-01

    A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
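
    A minimal Monte Carlo sketch of one reading of the compound model (the report gives the exact analytic composition and probability tables):

    ```python
    import numpy as np
    from scipy import stats

    def thunderstorm_pmf_mc(r, p, lam, n_sim=20_000, kmax=30, rng=None):
        """Compound model sketch: the number of thunderstorm events over
        the large area is negative binomial(r, p), and each event brings a
        zero-truncated Poisson(lam) number of storms over the small area."""
        rng = np.random.default_rng(rng)
        events = stats.nbinom.rvs(r, p, size=n_sim, random_state=rng)
        storms = np.zeros(n_sim, dtype=int)
        for i in np.flatnonzero(events > 0):
            draws = rng.poisson(lam, events[i])
            while (draws == 0).any():          # zero-truncation by rejection
                zero = draws == 0
                draws[zero] = rng.poisson(lam, zero.sum())
            storms[i] = draws.sum()
        return np.bincount(storms, minlength=kmax)[:kmax] / n_sim

    pmf = thunderstorm_pmf_mc(r=2, p=0.5, lam=1.3)
    print(pmf[:6].round(4))   # P(0 storms), P(1 storm), ...
    ```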

  19. Assessing hail risk for a building portfolio by generating stochastic events

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie

    2015-04-01

    Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events were reported in recent years, among them the July 2011 event, which cost the Aargauer public insurance company (North-western Switzerland) around 125 million EUR. This study presents new developments in a stochastic model that aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with six random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, ...) as well as the distributions of the Gaussian parameters are inferred from two radar intensity maps, namely that of the aforementioned event and that of an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data. The simulation parameters are then adjusted by trial and error in order to best reproduce the expected distributions. The mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
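
    A minimal sketch of the stochastic event generator, summing Gaussian intensity kernels with the six random parameters named above (the parameter ranges here are illustrative, not those inferred from the radar maps):

    ```python
    import numpy as np

    def generate_hail_event(n_cells, rng):
        """One stochastic hail event on a 50 x 50 km grid: a sum of
        anisotropic Gaussian kernels with random centre, peak hailstone
        size, spread, eccentricity and orientation."""
        x = np.linspace(0.0, 50.0, 200)
        X, Y = np.meshgrid(x, x)
        intensity = np.zeros_like(X)
        for _ in range(n_cells):
            cx, cy = rng.uniform(5, 45, 2)                # cell centre, km
            peak = rng.uniform(1.0, 6.0)                  # max stone size, cm
            sd = rng.uniform(1.0, 4.0)                    # spread, km
            ecc = rng.uniform(1.0, 3.0)                   # eccentricity
            theta = rng.uniform(0.0, np.pi)               # orientation
            dx, dy = X - cx, Y - cy
            u = dx * np.cos(theta) + dy * np.sin(theta)   # rotated axes
            v = -dx * np.sin(theta) + dy * np.cos(theta)
            intensity += peak * np.exp(
                -0.5 * ((u / (sd * ecc)) ** 2 + (v / sd) ** 2))
        return intensity

    rng = np.random.default_rng(7)
    event = generate_hail_event(n_cells=5, rng=rng)
    print(event.max(), (event > 2.0).mean())  # peak size, area above 2 cm
    ```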

  20. Chaotic density fluctuations in L-mode plasmas of the DIII-D tokamak

    DOE PAGES

    Maggs, J. E.; Rhodes, Terry L.; Morales, G. J.

    2015-03-05

    Analysis of the time series obtained with the Doppler backscattering system (DBS) in the DIII-D tokamak shows that intermediate wave number plasma density fluctuations in low confinement (L-mode) tokamak plasmas are chaotic. Here, the supporting evidence is based on the shape of the power spectrum; the location of the signal in the complexity-entropy plane (C-H plane); and the population of the corresponding Bandt-Pompe probability distributions.
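
    A minimal sketch of the Bandt-Pompe construction and the resulting complexity-entropy (C-H) coordinates used as evidence here:

    ```python
    import numpy as np
    from math import factorial, log

    def permutation_entropy_complexity(x, d=5, tau=1):
        """Bandt-Pompe ordinal-pattern distribution, normalized permutation
        entropy H, and Jensen-Shannon statistical complexity C: the two
        coordinates of the C-H plane."""
        n = len(x) - (d - 1) * tau
        patterns = np.array([np.argsort(x[i:i + d * tau:tau]) for i in range(n)])
        keys = (patterns * (d ** np.arange(d))).sum(axis=1)  # encode patterns
        _, counts = np.unique(keys, return_counts=True)
        p = np.zeros(factorial(d))
        p[:counts.size] = counts / n       # zero-padded over all d! patterns
        S = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))   # Shannon entropy
        H = S(p) / log(factorial(d))
        u = np.full_like(p, 1.0 / p.size)  # uniform reference distribution
        JS = S((p + u) / 2) - 0.5 * S(p) - 0.5 * S(u)
        JSmax = -0.5 * ((p.size + 1) / p.size * log(p.size + 1)
                        - 2 * log(2 * p.size) + log(p.size))
        return H, JS / JSmax * H           # (H, C)

    rng = np.random.default_rng(8)
    print(permutation_entropy_complexity(rng.normal(size=5000)))  # noise: H ~ 1
    ```

    Chaotic signals sit at intermediate H with near-maximal C for that H, whereas stochastic signals crowd toward H near 1 and low C, which is the discriminating feature exploited in the paper.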

  1. Anomalous Diffusion Approximation of Risk Processes in Operational Risk of Non-Financial Corporations

    NASA Astrophysics Data System (ADS)

    Magdziarz, M.; Mista, P.; Weron, A.

    2007-05-01

    We introduce an approximation of risk processes by anomalous diffusion. We consider the case where the waiting times between successive occurrences of the claims belong to the domain of attraction of an α-stable distribution. The relationship between the obtained approximation and the celebrated fractional diffusion equation is emphasised. We also establish upper bounds for the ruin probability in the considered model and give some numerical examples.

  2. Analysis of Nondeterministic Search Patterns for Minimization of UAV Counter-Targeting

    DTIC Science & Technology

    2013-03-01

    [Only snippet fragments of this report are preserved: part of the acronym list — ... Defense System; NPS, Naval Postgraduate School; PDF, Probability Distribution Function; SLS, Sea Level Standard; UAV, Unmanned Aerial Vehicle; UAS, Unmanned Aerial ... — and excerpts of the text:] ... intelligence regarding a target's position is obtained, or when contact on a known target is lost. The shape of the AOU is often circular or elliptical. ... exceed this artificial boundary, ensuring that the searcher will never violate the actual area boundary. This constraint still enables the searcher to ...

  3. Diffusion of oxygen in cork.

    PubMed

    Lequin, Sonia; Chassagne, David; Karbowiak, Thomas; Simon, Jean-Marc; Paulin, Christian; Bellat, Jean-Pierre

    2012-04-04

    This work reports measurements of the effective oxygen diffusion coefficient in raw cork. Kinetics of oxygen transfer through cork is studied at 298 K thanks to a homemade manometric device composed of two gas compartments separated by a cork wafer sample. The first compartment contains oxygen, whereas the second one is kept under dynamic vacuum. The pressure decrease in the first compartment is recorded as a function of time. The effective diffusion coefficient D_eff is obtained by applying Fick's law to the transient state using a numerical method based on finite differences. An analytical model derived from Fick's law applied to the steady state is also proposed. Results given by these two methods are in close agreement with each other. The harmonic average of the effective diffusion coefficients obtained from the distribution of 15 cork wafers of 3 mm thickness is 1.1 × 10⁻⁹ m² s⁻¹, with a large distribution over four decades. The statistical analysis of the Gaussian distribution obtained on a 3 mm cork wafer is extrapolated to a 48 mm cork wafer, which length corresponds to a full cork stopper. In this case, the probability density distribution gives a mean value of D_eff equal to 1.6 × 10⁻⁹ m² s⁻¹. This result shows that it is possible to obtain the effective diffusion coefficient of oxygen through cork from short time (few days) measurements performed on a thin cork wafer, whereas months are required to obtain the diffusion coefficient for a full cork stopper. Permeability and oxygen transfer rate are also calculated for comparison with data from other studies.
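
    A minimal sketch of a transient finite-difference (FTCS) solution of Fick's second law through a wafer, using the paper's mean D_eff and thickness as inputs and the two-compartment boundary conditions of the experiment:

    ```python
    import numpy as np

    # Explicit FTCS scheme for dc/dt = D * d2c/dx2 across the wafer:
    # fixed concentration on the oxygen side, ~zero on the evacuated side.
    D = 1.1e-9            # effective diffusion coefficient, m^2/s
    L = 3e-3              # wafer thickness, m
    nx = 61
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D          # stability requires dt <= dx^2 / (2 D)

    c = np.zeros(nx)
    c[0] = 1.0                       # normalized upstream concentration
    flux = 0.0
    for step in range(200_000):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = 1.0, 0.0       # boundary conditions
        flux = D * (c[-2] - c[-1]) / dx   # downstream flux

    # at steady state the flux tends to D * (c0 - cL) / L
    print(flux, D * 1.0 / L)
    ```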

  4. Robust state transfer in the quantum spin channel via weak measurement and quantum measurement reversal

    NASA Astrophysics Data System (ADS)

    He, Zhi; Yao, Chunmei; Zou, Jian

    2013-10-01

    Using the weak measurement (WM) and quantum measurement reversal (QMR) approach, robust state transfer and entanglement distribution can be realized in the spin-1/2 Heisenberg chain. We find that ultrahigh fidelity and long-distance quantum state transfer with a certain success probability can be obtained using proper WM and QMR, i.e., the average fidelity of a general pure state increases from 80% to almost 100% and is almost size independent. We also find that the distance and quality of entanglement distribution for the Bell state and the general Werner mixed state can be obviously improved by the WM and QMR approach.

  5. Relative mass distributions of neutron-rich thermally fissile nuclei within a statistical model

    NASA Astrophysics Data System (ADS)

    Kumar, Bharat; Kannan, M. T. Senthil; Balasubramaniam, M.; Agrawal, B. K.; Patra, S. K.

    2017-09-01

    We study the binary mass distribution for the recently predicted thermally fissile neutron-rich uranium and thorium nuclei using a statistical model. The level density parameters needed for the study are evaluated from the excitation energies of the temperature-dependent relativistic mean field formalism. The excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of a particular fragmentation. As representative cases, we present the results for the binary yields of ²⁵⁰U and ²⁵⁴Th. The relative yields are presented for three different temperatures: T = 1, 2, and 3 MeV.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
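
    A minimal sketch of the imputation-and-instance-generation idea (the actual MCBDG method also models error distributions; the column names below are hypothetical):

    ```python
    import numpy as np
    import pandas as pd

    def generate_completed_instances(df, n_instances=10, rng=None):
        """Draw each missing entry from the empirical distribution of the
        observed values in its own column, and return several completed
        copies of the database for downstream model training."""
        rng = np.random.default_rng(rng)
        completed = []
        for _ in range(n_instances):
            inst = df.copy()
            for col in df.columns:
                observed = df[col].dropna().to_numpy()
                missing = inst[col].isna()
                inst.loc[missing, col] = rng.choice(observed, missing.sum())
            completed.append(inst)
        return completed

    df = pd.DataFrame({"burnup": [12.1, np.nan, 30.5, np.nan, 22.0],
                       "enrichment": [3.2, 4.1, np.nan, 3.6, np.nan]})
    print(generate_completed_instances(df, n_instances=2, rng=0)[0])
    ```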

  7. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N→∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N,K→∞ with K/N fixed.
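
    A minimal Monte Carlo sketch of the quantities studied, sampling Haar-random pure states and computing D and the G concurrence:

    ```python
    import numpy as np

    def g_concurrence_samples(N, n_samples=2000, rng=None):
        """Sample random pure states of an N x N system (normalized complex
        Gaussian coefficient matrices give Haar-distributed states), then
        return the G concurrence G = N * det(rho_A)**(1/N)."""
        rng = np.random.default_rng(rng)
        G = np.empty(n_samples)
        for i in range(n_samples):
            psi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
            psi /= np.linalg.norm(psi)
            rho_A = psi @ psi.conj().T          # reduced density matrix
            D = np.linalg.det(rho_A).real       # determinant monotone
            G[i] = N * max(D, 0.0) ** (1.0 / N)
        return G

    for N in (4, 16, 64):
        # sample mean drifts toward the concentration point 1/e ~ 0.368
        print(N, g_concurrence_samples(N, rng=N).mean())
    ```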

  8. Population dynamical behavior of Lotka-Volterra system under regime switching

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyue; Jiang, Daqing; Mao, Xuerong

    2009-10-01

    In this paper, we investigate a Lotka-Volterra system under regime switching, dx(t) = diag(x₁(t), …, xₙ(t))[(b(r(t)) + A(r(t))x(t))dt + σ(r(t))dB(t)], where B(t) is a standard Brownian motion and r(t) is a Markov chain. The aim here is to find out what happens under regime switching. We first obtain sufficient conditions for the existence of global positive solutions, stochastic permanence and extinction. We find that both stochastic permanence and extinction are closely related to the stationary probability distribution of the Markov chain. The limit of the time average of the sample path of the solution is then estimated by two constants related to the stationary distribution and the coefficients. Finally, the main results are illustrated by several examples.

  9. A numerical 4D Collision Risk Model

    NASA Astrophysics Data System (ADS)

    Schmitt, Pal; Culloch, Ross; Lieber, Lilian; Kregting, Louise

    2017-04-01

    With the growing number of marine renewable energy (MRE) devices being installed across the world, some concern has been raised about the possibility of harming mobile marine fauna by collision. Although physical contact between an MRE device and an organism has not been reported to date, these novel sub-sea structures pose a challenge for accurately estimating collision risks as part of environmental impact assessments. Even if the animal motion is simplified to linear translation, ignoring likely evasive behaviour, the mathematical problem of establishing an impact probability is not trivial. We present a numerical algorithm to obtain such probability distributions using transient, four-dimensional simulations of a novel marine renewable device concept, Deep Green, Minesto's power plant, hereafter referred to as the 'kite', which flies in a figure-of-eight configuration. Simulations were carried out altering several configurations, including kite depth, kite speed and kite trajectory, while keeping the speed of the moving object constant. Since the kite assembly is defined as two parts in the model, a tether (attached to the seabed) and the kite, the collision risk of each part is reported independently. By comparing the number of collisions with the number of collision-free simulations, a probability of impact is obtained for each simulated position in the cross-section of the area. Results suggest that close to the bottom, where the tether amplitude is small, the path is always blocked and the impact probability is 100%, as expected. However, higher up in the water column, the collision probability is twice as high on the midline, where the tether passes twice per period, as at the extremes of its trajectory. The collision probability distribution is much more complex in the upper end of the water column, where the kite and tether can simultaneously collide with the object. Results demonstrate the viability of such models, which can also incorporate empirical field data for assessing the probability of collision risk of animals with an MRE device under varying operating conditions.

  10. Past, present and future distributions of an Iberian Endemic, Lepus granatensis: ecological and evolutionary clues from species distribution models.

    PubMed

    Acevedo, Pelayo; Melo-Ferreira, José; Real, Raimundo; Alves, Paulo Célio

    2012-01-01

    The application of species distribution models (SDMs) in ecology and conservation biology is increasing and assuming an important role, mainly because they can be used to hindcast past and predict current and future species distributions. However, the accuracy of SDMs depends on the quality of the data and on appropriate theoretical frameworks. In this study, comprehensive data on the current distribution of the Iberian hare (Lepus granatensis) were used to i) determine the species' ecogeographical constraints, ii) hindcast a climatic model for the last glacial maximum (LGM), relating it to inferences derived from molecular studies, and iii) calibrate a model to assess the species future distribution trends (up to 2080). Our results showed that the climatic factor (in its pure effect and when it is combined with the land-cover factor) is the most important descriptor of the current distribution of the Iberian hare. In addition, the model's output was a reliable index of the local probability of species occurrence, which is a valuable tool to guide species management decisions and conservation planning. Climatic potential obtained for the LGM was combined with molecular data and the results suggest that several glacial refugia may have existed for the species within the major Iberian refugium. Finally, a high probability of occurrence of the Iberian hare in the current species range and a northward expansion were predicted for future. Given its current environmental envelope and evolutionary history, we discuss the macroecology of the Iberian hare and its sensitivity to climate change.

  11. Past, Present and Future Distributions of an Iberian Endemic, Lepus granatensis: Ecological and Evolutionary Clues from Species Distribution Models

    PubMed Central

    Acevedo, Pelayo; Melo-Ferreira, José; Real, Raimundo; Alves, Paulo Célio

    2012-01-01

    The application of species distribution models (SDMs) in ecology and conservation biology is increasing and assuming an important role, mainly because they can be used to hindcast past and predict current and future species distributions. However, the accuracy of SDMs depends on the quality of the data and on appropriate theoretical frameworks. In this study, comprehensive data on the current distribution of the Iberian hare (Lepus granatensis) were used to i) determine the species’ ecogeographical constraints, ii) hindcast a climatic model for the last glacial maximum (LGM), relating it to inferences derived from molecular studies, and iii) calibrate a model to assess the species future distribution trends (up to 2080). Our results showed that the climatic factor (in its pure effect and when it is combined with the land-cover factor) is the most important descriptor of the current distribution of the Iberian hare. In addition, the model’s output was a reliable index of the local probability of species occurrence, which is a valuable tool to guide species management decisions and conservation planning. Climatic potential obtained for the LGM was combined with molecular data and the results suggest that several glacial refugia may have existed for the species within the major Iberian refugium. Finally, a high probability of occurrence of the Iberian hare in the current species range and a northward expansion were predicted for future. Given its current environmental envelope and evolutionary history, we discuss the macroecology of the Iberian hare and its sensitivity to climate change. PMID:23272115

  12. Distribution of distances between DNA barcode labels in nanochannels close to the persistence length

    NASA Astrophysics Data System (ADS)

    Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.

    2015-02-01

    We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.

  13. Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain)

    PubMed Central

    Sánchez Sánchez, Yolanda; Mateos Picado, Marina

    2018-01-01

    Wildfire is a major threat to the environment, and this threat is aggravated by different climatic and socioeconomic factors. The availability of detailed, reliable mapping and periodic and immediate updates makes wildfire prevention and extinction work more effective. An analyst protocol has been generated that allows the precise updating of high-resolution thematic maps. For this protocol, images obtained through the Sentinel 2A satellite, with a return time of five days, have been merged with Light Detection and Ranging (LiDAR) data with a density of 0.5 points/m² in order to obtain vegetation mapping with an accuracy of 88% (kappa = 0.86), which is then extrapolated to fuel model mapping through a decision tree. This process, which is fast and reliable, serves as a cartographic base for the later calculation of ignition-probability mapping. The generated cartography is a fundamental tool to be used in the decision making involved in the planning of preventive silvicultural treatments, extinguishing media distribution, infrastructure construction, etc. PMID:29522460

  14. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.

  15. Application of a Probabilistic Sizing Methodology for Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit

    2012-07-01

    Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness. Their brittle behaviour often leads to sizing them with inflated safety factors applied to the maximum stresses, which oversizes the structures and conflicts with the major driver in space architecture, the mass criterion. This paper presents a methodology to size ceramic structures based on their failure probability. From failure tests on samples, the Weibull law that characterizes the strength distribution of the material is obtained. The A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. From these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
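
    A minimal sketch of the Weibull machinery, fitting the law by median-rank regression (an assumed fitting choice; the strength data are illustrative) and evaluating a failure probability:

    ```python
    import numpy as np

    def weibull_failure_probability(sigma, sigma0, m, v_ratio=1.0):
        """Two-parameter Weibull weakest-link model:
        P_f = 1 - exp(-(V/V0) * (sigma / sigma0)**m)."""
        return 1.0 - np.exp(-v_ratio * (sigma / sigma0) ** m)

    # fit the Weibull modulus m and scale sigma0 from sample failure
    # stresses (MPa) by median-rank regression
    strengths = np.sort([212., 230., 241., 255., 262., 270., 284., 301.])
    n = strengths.size
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median ranks
    X = np.log(strengths)
    Y = np.log(-np.log(1.0 - ranks))
    m, b = np.polyfit(X, Y, 1)                           # slope = modulus m
    sigma0 = np.exp(-b / m)
    print(f"m = {m:.1f}, sigma0 = {sigma0:.0f} MPa")
    print(weibull_failure_probability(150.0, sigma0, m)) # P_f at 150 MPa
    ```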

  16. Mapping Wildfire Ignition Probability Using Sentinel 2 and LiDAR (Jerte Valley, Cáceres, Spain).

    PubMed

    Sánchez Sánchez, Yolanda; Martínez-Graña, Antonio; Santos Francés, Fernando; Mateos Picado, Marina

    2018-03-09

    Wildfire is a major threat to the environment, and this threat is aggravated by different climatic and socioeconomic factors. The availability of detailed, reliable mapping and periodic and immediate updates makes wildfire prevention and extinction work more effective. An analyst protocol has been generated that allows the precise updating of high-resolution thematic maps. For this protocol, images obtained through the Sentinel 2A satellite, with a return time of five days, have been merged with Light Detection and Ranging (LiDAR) data with a density of 0.5 points/m² in order to obtain vegetation mapping with an accuracy of 88% (kappa = 0.86), which is then extrapolated to fuel model mapping through a decision tree. This process, which is fast and reliable, serves as a cartographic base for the later calculation of ignition-probability mapping. The generated cartography is a fundamental tool to be used in the decision making involved in the planning of preventive silvicultural treatments, extinguishing media distribution, infrastructure construction, etc.

  17. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, upper and lower bounds for the probability of intersection of an elliptically shaped target on a continuous rectangular grid can be calculated. Charts have been constructed that permit the values of these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
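    The first generalization above (intersection probability roughly proportional to the target's greatest dimension over the line spacing) is easy to check numerically. The following Monte Carlo sketch is a generic illustration, not the paper's charts; the ellipse dimensions and line spacing are arbitrary.

```python
# Hedged Monte Carlo sketch: probability that an elliptical target with
# semi-axes a >= b and uniformly random orientation is intersected by at
# least one line of a parallel-line search with spacing d. By symmetry the
# center offset is uniform between two adjacent lines.
import numpy as np

rng = np.random.default_rng(0)

def intersection_probability(a, b, d, n=200_000):
    theta = rng.uniform(0.0, np.pi, n)          # random target orientation
    u = rng.uniform(0.0, d, n)                  # center offset from a line
    # Half-extent of the ellipse along the direction perpendicular to the lines.
    h = np.sqrt((a * np.cos(theta)) ** 2 + (b * np.sin(theta)) ** 2)
    hit = (u <= h) | (u >= d - h)               # center within h of either line
    return hit.mean()

# Elongated target whose largest dimension (2a) is below the spacing d.
print(intersection_probability(a=1.0, b=0.2, d=4.0))
```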

  18. A Physically-Based and Distributed Tool for Modeling the Hydrological and Mechanical Processes of Shallow Landslides

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Noto, L. V.; Dialynas, Y. G.; Caracciolo, D.; Bras, R. L.

    2015-12-01

    This work presents the capabilities of the tRIBS-VEGGIE-Landslide model in two versions: one developed within a probabilistic framework and one coupled with a root-cohesion module. The probabilistic version treats geotechnical and soil-retention-curve parameters as random variables across the basin and estimates theoretical probability distributions of slope stability and of the associated "factor of safety" commonly used to describe the occurrence of shallow landslides. The derived distributions are used to obtain the spatio-temporal dynamics of the probability of failure, conditioned on soil moisture dynamics at each watershed location. The framework has been tested in the Luquillo Experimental Forest (Puerto Rico), where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of the probability of failure. The other version of the model accounts for the additional cohesion exerted by vegetation roots. The approach uses the Fiber Bundle Model (FBM) framework, which allows the root strength to be evaluated as a function of the stress-strain relationships of bundles of fibers. The model requires knowledge of the root architecture to evaluate the additional reinforcement from each root diameter class. The root architecture is represented with a branching topology model based on Leonardo's rule. The methodology has been tested on a simple case study to explore the role of both hydrological and mechanical root effects. Results demonstrate that the effects of root water uptake can at times be more significant than the mechanical reinforcement, and that the additional resistance provided by roots depends heavily on the vegetation root structure and length.
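    As a schematic of the probabilistic stability calculation described above (not the tRIBS-VEGGIE-Landslide code), the sketch below draws random geotechnical parameters for an infinite-slope model and estimates the probability that the factor of safety falls below one. All parameter distributions are assumed.

```python
# Minimal sketch: Monte Carlo probability of failure for an infinite-slope
# stability model with random cohesion and friction angle. Values are
# illustrative assumptions, not calibrated basin parameters.
import numpy as np

rng = np.random.default_rng(1)

def probability_of_failure(slope_deg, depth, gamma=19e3, n=100_000):
    beta = np.radians(slope_deg)
    c = rng.lognormal(mean=np.log(5e3), sigma=0.4, size=n)     # cohesion, Pa
    phi = np.radians(rng.normal(32.0, 3.0, size=n))            # friction angle
    tau = gamma * depth * np.sin(beta) * np.cos(beta)          # driving stress
    resist = c + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)
    fs = resist / tau                                          # factor of safety
    return np.mean(fs < 1.0)

print(probability_of_failure(slope_deg=35.0, depth=1.5))
```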

  19. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
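    The Bayesian alternative sketched above can be made concrete with a conjugate Gamma-Poisson model. The sketch below is not the authors' implementation; the prior values and counts are hypothetical.

```python
# Hedged sketch: model background counts from clean detectors as Poisson with
# a Gamma prior on the mean, then flag a detector whose observed count is
# improbable under the posterior predictive (a negative binomial).
from scipy import stats

# Gamma prior (shape, rate) on the background mean, hypothetically elicited
# from historical counts on new, uncontaminated detectors.
alpha0, beta0 = 2.0, 1.5

# Update with historical data: a total of 9 counts over 6 counting periods.
alpha = alpha0 + 9
beta = beta0 + 6

# Posterior predictive for one new period: NegBinom(r=alpha, p=beta/(beta+1)).
pred = stats.nbinom(alpha, beta / (beta + 1.0))

observed = 7
p_value = pred.sf(observed - 1)     # P(count >= observed)
print(f"P(count >= {observed}) = {p_value:.4f}")
if p_value < 0.01:
    print("Detector flagged as possibly contaminated.")
```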

  20. Probability Distribution of Turbulent Kinetic Energy Dissipation Rate in Ocean: Observations and Approximations

    NASA Astrophysics Data System (ADS)

    Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.

    2017-10-01

    The probability distribution of the turbulent kinetic energy dissipation rate in the stratified ocean usually deviates from the classic lognormal distribution that has been formulated for, and often observed in, unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of microscale shear, collected in the East China Sea, the northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε̃_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, a lognormal distribution of ε̃_r is preferable, although the Burr is an acceptable alternative. The skewness Sk_ε and the kurtosis K_ε of the dissipation rate appear to be well correlated over a wide range of Sk_ε and K_ε variability.
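    Fitting and comparing the Burr XII and lognormal candidates is straightforward with scipy; the sketch below uses fabricated samples purely to show the mechanics, not the authors' microstructure data.

```python
# Illustrative sketch: maximum-likelihood fits of Burr XII and lognormal
# models to synthetic (normalized) dissipation-rate samples, compared by
# log-likelihood. The "data" here are fabricated lognormal draws.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
eps = rng.lognormal(mean=0.0, sigma=1.5, size=5000)     # synthetic ε samples

for name, dist in (("burr12", stats.burr12), ("lognorm", stats.lognorm)):
    params = dist.fit(eps, floc=0.0)                    # fix location at zero
    ll = np.sum(dist.logpdf(eps, *params))
    print(f"{name}: log-likelihood = {ll:.1f}")
```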

  1. Interacting damage models mapped onto ising and percolation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Renaud; Pride, Steven R.

    The authors introduce a class of damage models on regular lattices with isotropic interactions between the broken cells of the lattice. Quasistatic fiber bundles are an example. The interactions are assumed to be weak, in the sense that the stress perturbation from a broken cell is much smaller than the mean stress in the system. The system starts intact with a surface-energy threshold required to break any cell sampled from an uncorrelated quenched-disorder distribution. The evolution of this heterogeneous system is ruled by Griffith's principle, which states that a cell breaks when the release in potential (elastic) energy in the system exceeds the surface-energy barrier necessary to break the cell. By direct integration over all possible realizations of the quenched disorder, they obtain the probability distribution of each damage configuration at any level of the imposed external deformation. They demonstrate an isomorphism between the distributions so obtained and standard generalized Ising models, in which the coupling constants and effective temperature in the Ising model are functions of the nature of the quenched-disorder distribution and the extent of accumulated damage. In particular, they show that damage models with global load sharing are isomorphic to standard percolation theory and that damage models with a local load-sharing rule are isomorphic to the standard Ising model, and they draw consequences thereof for the universality class and behavior of the autocorrelation length of the breakdown transitions corresponding to these models. They also treat damage models having more general power-law interactions, and classify the breakdown process as a function of the power-law interaction exponent. Last, they show that the probability distribution over configurations is a maximum of Shannon's entropy under some specific constraints related to the energetic balance of the fracture process, which firmly relates this type of quenched-disorder-based damage model to standard statistical mechanics.

  2. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  3. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods that resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
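    A rough sketch of inverse-probability bagging in the spirit described above (not the sambia package itself): bootstrap the training data with weights equal to the inverse phase-two sampling probabilities, then aggregate random forests trained on each resample. Data and sampling probabilities below are synthetic.

```python
# Hedged sketch: inverse-probability-weighted bagging for a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

n = 1000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)
# Hypothetical phase-two sampling probabilities (cases oversampled).
p_sel = np.where(y == 1, 0.9, 0.3)
w = 1.0 / p_sel                               # inverse-probability weights

forests = []
for _ in range(10):                           # 10 bagged forests
    idx = rng.choice(n, size=n, replace=True, p=w / w.sum())
    forests.append(RandomForestClassifier(n_estimators=50).fit(X[idx], y[idx]))

X_new = rng.normal(size=(5, 5))
proba = np.mean([f.predict_proba(X_new)[:, 1] for f in forests], axis=0)
print(proba)                                  # bias-corrected class probabilities
```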

  4. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  5. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and features extracted from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs three major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, diffusion parameters are estimated and normalized: the apparent diffusion coefficients of the delineated prostate volumes at different b values are computed and then refined using a generalized Gaussian Markov random field model. Then, the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest-neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen-based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity-constrained sparse autoencoders that employ the apparent diffusion coefficient cumulative distribution functions, for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnostic accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promising performance of the presented computer-aided diagnostic system. PMID:29804518

  6. Grading system to categorize breast MRI using BI-RADS 5th edition: a statistical study of non-mass enhancement descriptors in terms of probability of malignancy.

    PubMed

    Asada, Tatsunori; Yamada, Takayuki; Kanemaki, Yoshihide; Fujiwara, Keishi; Okamoto, Satoko; Nakajima, Yasuo

    2018-03-01

    To analyze the association of breast non-mass enhancement descriptors in the BI-RADS 5th edition with malignancy, and to establish a grading system and categorization of descriptors. This study was approved by our institutional review board. A total of 213 patients were enrolled. Breast MRI was performed with a 1.5-T MRI scanner using a 16-channel breast radiofrequency coil. Two radiologists determined internal enhancement and distribution of non-mass enhancement by consensus. Corresponding pathologic diagnoses were obtained by either biopsy or surgery. The probability of malignancy by descriptor was analyzed using Fisher's exact test and multivariate logistic regression analysis. The probability of malignancy by category was analyzed using Fisher's exact and multi-group comparison tests. One hundred seventy-eight lesions were malignant. Multivariate model analysis showed that internal enhancement (homogeneous vs others, p < 0.001, heterogeneous and clumped vs clustered ring, p = 0.003) and distribution (focal and linear vs segmental, p < 0.001) were the significant explanatory variables. The descriptors were classified into three grades of suspicion, and the categorization (3, 4A, 4B, 4C, and 5) by sum-up grades showed an incremental increase in the probability of malignancy (p < 0.0001). The three-grade criteria and categorization by sum-up grades of descriptors appear valid for non-mass enhancement.

  7. Application of a time probabilistic approach to seismic landslide hazard estimates in Iran

    NASA Astrophysics Data System (ADS)

    Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.

    2009-04-01

    Iran is a country located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes have caused many fatalities and much damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people, and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial to reducing property damage and loss of life in future earthquakes. For this purpose, a time-probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists of evaluating the recurrence of seismically induced slope-failure conditions inferred from Newmark's model. First, by adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for Alborz-Central Iran and Zagros, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson et al. (1998), empirical formulae specifically developed for Alborz-Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement D_n to Arias intensity I_a and to the slope critical acceleration a_c. These formulae were employed to evaluate the slope critical acceleration (A_c)_x for which a prefixed probability exists that seismic shaking would result in a D_n value equal to a threshold x whose exceedance would cause landslide triggering. The obtained a_c values represent the minimum slope resistance required to keep the probability of seismic landslide triggering within the prefixed value. In particular, we calculated the spatial distribution of (A_c)_x for x thresholds of 10 and 2 cm, representing triggering conditions for coherent slides (e.g., slumps, block slides, slow earth flows) and disrupted slides (e.g., rock falls, rock slides, rock avalanches), respectively. Then we produced a probabilistic national map that shows the spatial distribution of (A_c)_10 and (A_c)_2 for a 10% probability of exceedance in 50 years, a level of hazard equal to that commonly used for building codes. The spatial distribution of the calculated (A_c)_x values can be compared with the in situ actual a_c values of specific slopes to estimate whether these slopes have a significant probability of failing under seismic action in the future. As an example of a possible application of this kind of time-probabilistic map to hazard estimates, we compared the values obtained for the Manjil region with a GIS map providing the spatial distribution of estimated a_c values in the same region. The spatial distribution of slopes characterized by a_c < (A_c)_10 was then compared with the spatial distribution of the major landslides of coherent type triggered by the Manjil earthquake.
This comparison provides indications of the potential, problems, and limits of the proposed approach for the study area. References: Cornell, C.A., 1968: Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606. Del Gaudio, V., Wasowski, J., and Pierri, P., 2003: An approach to time-probabilistic evaluation of seismically induced landslide hazard, Bull. Seism. Soc. Am., 93, 557-569. Jibson, R.W., Harp, E.L., and Michael, J.A., 1998: A method for producing digital probabilistic seismic landslide hazard maps: an example from the Los Angeles, California, area, U.S. Geological Survey Open-File Report 98-113, Golden, Colorado, 17 pp.
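    The Newmark-displacement regression step described above can be illustrated with the global formula of Jibson, Harp & Michael (1998); the paper itself uses region-specific formulae for Alborz-Central Iran and Zagros, so the sketch below is only a stand-in.

```python
# Hedged sketch: Newmark sliding-block displacement from Arias intensity via
#     log10(Dn) = 1.521*log10(Ia) - 1.993*log10(ac) - 1.546
# (Jibson, Harp & Michael 1998), with Dn in cm, Ia in m/s, ac in g.
import numpy as np

def newmark_displacement_cm(Ia_ms, ac_g):
    return 10.0 ** (1.521 * np.log10(Ia_ms) - 1.993 * np.log10(ac_g) - 1.546)

def critical_acceleration_g(Ia_ms, Dn_threshold_cm):
    """Invert the regression: smallest ac keeping Dn below the threshold."""
    log_ac = (1.521 * np.log10(Ia_ms) - 1.546 - np.log10(Dn_threshold_cm)) / 1.993
    return 10.0 ** log_ac

Ia = 2.0  # Arias intensity, m/s (illustrative)
for threshold in (2.0, 10.0):   # cm: disrupted vs coherent slide thresholds
    print(f"Dn <= {threshold:4.1f} cm requires ac >= {critical_acceleration_g(Ia, threshold):.3f} g")
```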

  8. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    NASA Astrophysics Data System (ADS)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

    This paper presents a new knock model which combines a deterministic knock model, based on the in-cylinder temperature, with an exogenous noise disturbing that temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function, and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one calibration parameter and thus can easily be adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed by the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model predicts the knock probability with sufficient accuracy over various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions using a fuel balance and a lambda sensor, and differences below 1% were found.
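    A minimal sketch of the idea, not the authors' model: perturb the estimated in-cylinder temperature with Gaussian noise, evaluate an Arrhenius-like ignition delay, and count the fraction of cycles whose end-gas autoignites within a reference time window. All constants are hypothetical.

```python
# Hedged sketch: knock probability from exogenous temperature noise fed
# through an Arrhenius-like ignition-delay expression.
import numpy as np

rng = np.random.default_rng(4)

A, B = 1.0e-6, 7000.0        # Arrhenius pre-factor [s] and activation temp [K] (assumed)
t_available = 2.0e-3         # time window before the flame consumes the end-gas [s]

def knock_probability(T_est, sigma_T, n=100_000):
    T = T_est + rng.normal(0.0, sigma_T, n)      # exogenous temperature noise
    tau = A * np.exp(B / T)                      # ignition delay, Arrhenius-like
    return np.mean(tau < t_available)            # knock if autoignition is early

for T_est in (850.0, 900.0, 950.0):
    print(f"T = {T_est:.0f} K -> P(knock) = {knock_probability(T_est, 25.0):.3f}")
```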

  9. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from among many system safety and reliability tools to verify the loss-of-mission (LOM) and loss-of-crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.

  10. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    Thesis (AFIT/GAE) evaluating the three-parameter Weibull distribution function for predicting fracture probability in composite materials. The recoverable text derives an expression for the risk of rupture of a unidirectionally laminated composite subjected to pure bending and notes that this equation can be simplified further.

  11. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin, randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin-disk approximation, is used to obtain a relationship between the horizontally polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming a large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second-difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
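    The Phillips-Twomey step amounts to Tikhonov regularization with a second-difference smoothing operator. The sketch below shows the generic computation on a toy kernel; it is not the paper's disk-scattering kernel.

```python
# Generic sketch of Phillips-Twomey regularization: solve
#     (A^T A + lam * L^T L) f = A^T b
# where L is the second-difference operator. A, b, and lam are toy inputs.
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * np.abs(x[:, None] - x[None, :])) / n   # toy kernel matrix
f_true = np.exp(-((x - 0.5) ** 2) / 0.01)                # "true" density
b = A @ f_true + np.random.default_rng(5).normal(0, 1e-4, n)   # noisy data

# Second-difference operator of shape (n-2, n): rows of [1, -2, 1].
L = np.diff(np.eye(n), n=2, axis=0)

lam = 1e-6                                               # regularization weight
f_est = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
print(np.max(np.abs(f_est - f_true)))                    # reconstruction error
```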

  12. qPR: An adaptive partial-report procedure based on Bayesian inference.

    PubMed

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
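    The adaptive logic can be sketched with a one-parameter toy version of the decay model (qPR itself maintains a three-parameter joint posterior); the accuracy and guess-rate values below are assumptions.

```python
# Toy sketch of Bayesian adaptive testing: grid posterior over the decay
# constant tau of p(t) = g + (a - g)*exp(-t/tau); each trial picks the cue
# delay with the largest expected information gain, then updates the posterior.
import numpy as np

taus = np.linspace(0.05, 1.0, 60)            # candidate decay constants, s
posterior = np.ones_like(taus) / taus.size   # uniform prior
delays = np.linspace(0.0, 1.0, 9)            # candidate cue delays, s
a, g = 0.95, 0.1                             # initial accuracy, guess rate (assumed)

def p_correct(tau, t):
    return g + (a - g) * np.exp(-t / tau)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_delay(posterior):
    gains = []
    for t in delays:
        pc = p_correct(taus, t)
        p_resp = np.sum(posterior * pc)               # predictive P(correct)
        post_c = posterior * pc / p_resp              # posterior if correct
        post_w = posterior * (1 - pc) / (1 - p_resp)  # posterior if wrong
        expected_H = p_resp * entropy(post_c) + (1 - p_resp) * entropy(post_w)
        gains.append(entropy(posterior) - expected_H)
    return delays[int(np.argmax(gains))]

# Simulate 100 trials with a "true" tau of 0.3 s.
rng = np.random.default_rng(6)
for _ in range(100):
    t = best_delay(posterior)
    correct = rng.random() < p_correct(0.3, t)
    like = p_correct(taus, t) if correct else 1 - p_correct(taus, t)
    posterior = posterior * like
    posterior /= posterior.sum()

print("estimated tau:", np.sum(posterior * taus))
```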

  13. Probabilistic tsunami hazard assessment in Greece for seismic sources along the segmented Hellenic Arc

    NASA Astrophysics Data System (ADS)

    Novikova, Tatyana; Babeyko, Andrey; Papadopoulos, Gerassimos

    2017-04-01

    Greece and adjacent coastal areas are characterized by a high population exposure to tsunami hazard. The Hellenic Arc is the most active geotectonic structure for the generation of earthquakes and tsunamis. We performed probabilistic tsunami hazard assessment for selected locations along the Greek coastlines, the forecasting points officially used in tsunami warning operations by the Hellenic National Tsunami Warning Center and the NEAMTWS/IOC/UNESCO. In our analysis we considered seismic sources for tsunami generation along the western, central, and eastern segments of the Hellenic Arc. We first created a synthetic catalog spanning 10,000 years for all significant earthquakes with magnitudes in the range from 6.0 to 8.5, the real events being included in this catalog. For each event in the synthetic catalog, a tsunami was generated and propagated using a Boussinesq model. The probability of occurrence for each event was determined by a Gutenberg-Richter magnitude-frequency distribution. The results of our study are expressed as hazard curves and hazard maps. The hazard curves were obtained for the selected sites and present the annual probability of exceedance as a function of peak coastal tsunami amplitude. Hazard maps represent the distribution of peak coastal tsunami amplitudes corresponding to a fixed annual probability. In these forms our results can easily be compared to those obtained in other studies and further employed for the development of tsunami risk management plans. This research is a contribution to the EU-FP7 tsunami research project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe), grant agreement no: 603839, 2013-10-30.
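    The hazard-curve step described above reduces, under a Poisson assumption, to counting catalog exceedances. The sketch below uses synthetic amplitudes, not the ASTARTE simulations.

```python
# Schematic hazard-curve computation: given peak coastal amplitudes simulated
# for events in a synthetic catalog of length T years, the annual exceedance
# probability at a site is P(A > a in 1 yr) = 1 - exp(-N(a)/T).
import numpy as np

rng = np.random.default_rng(7)
T_catalog = 10_000.0                          # catalog length, years
amplitudes = rng.pareto(2.5, 400) * 0.2       # synthetic peak amplitudes, m

for a in (0.1, 0.5, 1.0, 2.0):                # amplitude thresholds, m
    rate = np.sum(amplitudes > a) / T_catalog # mean annual exceedance rate
    p_annual = 1.0 - np.exp(-rate)
    print(f"P(amplitude > {a:.1f} m in one year) = {p_annual:.2e}")
```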

  14. HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps

    NASA Astrophysics Data System (ADS)

    Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.

    2017-01-01

    We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed using a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field and comparing it to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux-uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and the marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP), and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.

  16. Microscopic description of pair transfer between two superfluid Fermi systems: Combining phase-space averaging and combinatorial techniques

    NASA Astrophysics Data System (ADS)

    Regnier, David; Lacroix, Denis; Scamps, Guillaume; Hashimoto, Yukio

    2018-03-01

    In a mean-field description of superfluidity, particle number and gauge angle are treated as quasiclassical conjugate variables. This level of description was recently used to describe nuclear reactions around the Coulomb barrier. Important effects of the relative gauge angle between two identical superfluid nuclei (symmetric collisions) on transfer probabilities and the fusion barrier have been uncovered. A theory making contact with experiments should at least average over different initial relative gauge angles. In the present work, we propose a new approach to obtain the multiple-pair transfer probabilities between superfluid systems. This method, called the phase-space combinatorial (PSC) technique, relies both on phase-space averaging and on combinatorial arguments to infer the full pair-transfer probability distribution at the cost of multiple mean-field calculations only. After benchmarking this approach in a schematic model, we apply it to the collision 20O+20O at various energies below the Coulomb barrier. The predictions for one-pair transfer are similar to results obtained with an approximate projection method, whereas significant differences are found for two-pair transfer. Finally, we investigated the applicability of the PSC method to the contact between nonidentical superfluid systems. A generalization of the method is proposed and applied to the schematic model, showing that the pair-transfer probabilities are reasonably reproduced. The applicability of the PSC method to asymmetric nuclear collisions is investigated for the 14O+20O collision, and it turns out that unrealistically small single- and multiple-pair transfer probabilities are obtained. This is explained by the fact that the relative gauge angle plays in this case a minor role in the particle-transfer process compared to other mechanisms, such as equilibration of the charge/mass ratio. We conclude that the best ground for probing gauge-angle effects in nuclear reactions and/or for applying the proposed PSC approach to pair transfer is the collision of identical open-shell spherical nuclei.

  17. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
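    A hedged illustration of estimating beta parameters from the three values: the sketch uses the common PERT-style mean and the conventional sigma = range/6 assumption mentioned above; it is a generic recipe, not the NASA Glenn method (which instead fixes one shape parameter).

```python
# Hedged sketch: beta distribution parameters from (min, most-likely, max)
# using mean = (min + 4*mode + max)/6 and sigma = (max - min)/6, then the
# standard beta mean/variance identities on the unit interval.
import numpy as np

def beta_from_three_points(lo, mode, hi):
    mean = (lo + 4.0 * mode + hi) / 6.0
    var = ((hi - lo) / 6.0) ** 2
    m = (mean - lo) / (hi - lo)               # mean mapped to [0, 1]
    v = var / (hi - lo) ** 2                  # variance mapped to [0, 1]
    common = m * (1.0 - m) / v - 1.0          # alpha + beta
    return m * common, (1.0 - m) * common     # (alpha, beta)

# Hypothetical turbine inlet temperature bounds, K.
a, b = beta_from_three_points(lo=1500.0, mode=1650.0, hi=1750.0)
print(f"alpha = {a:.2f}, beta = {b:.2f}")
```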

  18. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme-value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme-value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  19. The statistics of Pearce element diagrams and the Chayes closure problem

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones; they are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element-ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near-zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept, and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to those of a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.

  20. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric-redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, where it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host-galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves, and over redshift from simulated host-galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high-probability photometrically classified supernovae, and reducing the photometric-redshift probability to a single measurement and error bar.

  1. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD) which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean-field limit. The equations we have obtained are of the same type as those which have been recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.

  2. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10^-4 for the exponential distribution and 2.3×10^-4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10^-4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption as 4.6×10^-4. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption as 1.4×10^-5.
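    The exponential-recurrence probabilities quoted above follow from P = 1 - exp(-dt/mu) for a mean repose time mu; the mixture weights and means in the sketch below are illustrative, and the simple mixture ignores the conditioning on time since the last event that the paper discusses.

```python
# Simple sketch of the exponential-recurrence calculation.
import numpy as np

def p_next(dt, mu):
    """Probability of at least one event in the next dt years, mean repose mu."""
    return 1.0 - np.exp(-dt / mu)

def p_next_mixed(dt, weights, mus):
    # Naive mixture of two exponential processes; illustrative only, since it
    # ignores conditioning on the elapsed repose time.
    return sum(w * p_next(dt, mu) for w, mu in zip(weights, mus))

print(p_next(1.0, 7000.0))                              # ~1.4e-4 per year
print(p_next_mixed(1.0, (0.7, 0.3), (10000.0, 2000.0))) # hypothetical mixture
```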

  3. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multivariate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the one-dimensional, two-dimensional, or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single-truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double-truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is greatly reduced. Second, unlike Bayesian MCMC-based approaches, marginal pdfs, means, variances, or covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
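    The NNLS route to the MAP can be sketched directly: for a Gaussian prior truncated to non-negative slip, the MAP solves a non-negative least-squares problem on the stacked, whitened data and prior systems. The Green's functions, data, and covariances below are toy inputs.

```python
# Hedged sketch: MAP of a positivity-truncated Gaussian posterior via NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)

n_obs, n_slip = 30, 10
G = rng.normal(size=(n_obs, n_slip))          # toy Green's function matrix
m_true = np.maximum(rng.normal(0.5, 0.5, n_slip), 0.0)
d = G @ m_true + rng.normal(0, 0.05, n_obs)

sigma_d, sigma_m = 0.05, 1.0                  # data-noise / prior std devs
m0 = np.zeros(n_slip)                         # prior mean slip

# Stack whitened data and prior equations; NNLS enforces m >= 0.
A = np.vstack([G / sigma_d, np.eye(n_slip) / sigma_m])
b = np.concatenate([d / sigma_d, m0 / sigma_m])
m_map, _ = nnls(A, b)
print(np.round(m_map, 3))
```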

  4. Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension

    NASA Astrophysics Data System (ADS)

    Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek

    2018-04-01

    We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.

  5. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
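    For Onemax the exact post-mutation fitness distribution is a convolution of two binomials, which makes the polynomial-in-p structure explicit; the sketch below is a worked example of that special case, not the paper's general Krawtchouk-polynomial machinery.

```python
# Worked example: fitness after uniform bit-flip mutation of an Onemax string
# with k ones out of n bits is k - X + Y, X ~ Bin(k, p), Y ~ Bin(n - k, p),
# so its pmf is a convolution of two binomials (each entry a polynomial in p).
import numpy as np
from scipy import stats

def onemax_mutation_pmf(n, k, p):
    pmf = np.zeros(n + 1)
    px = stats.binom.pmf(np.arange(k + 1), k, p)          # ones flipped off
    py = stats.binom.pmf(np.arange(n - k + 1), n - k, p)  # zeros flipped on
    for x in range(k + 1):
        for y in range(n - k + 1):
            pmf[k - x + y] += px[x] * py[y]
    return pmf

pmf = onemax_mutation_pmf(n=10, k=7, p=0.1)
print(pmf.round(4), pmf.sum())                            # pmf sums to 1
```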

  6. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over a turbulent atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BER for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs for the EW distribution are compared with those for the Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
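    The averaging step can be checked by direct quadrature of the conditional BER over the EW irradiance pdf; the sketch below assumes a Q(sqrt(2*SNR)*h) conditional BER and illustrative EW parameters rather than fitted values.

```python
# Hedged numerical sketch: average BPSK BER over exponentiated Weibull (EW)
# irradiance fading by direct quadrature. alpha, beta, eta, snr are assumed.
import numpy as np
from scipy import integrate, special

alpha, beta, eta = 5.0, 2.5, 0.8      # EW shape/shape/scale (illustrative)
snr = 10.0                            # average electrical SNR (linear)

def ew_pdf(h):
    z = (h / eta) ** beta
    return (alpha * beta / eta) * (h / eta) ** (beta - 1) * np.exp(-z) \
        * (1.0 - np.exp(-z)) ** (alpha - 1)

def conditional_ber(h):
    return 0.5 * special.erfc(np.sqrt(snr) * h)   # Q(sqrt(2*snr)*h)

avg_ber, _ = integrate.quad(lambda h: conditional_ber(h) * ew_pdf(h), 0.0, np.inf)
norm, _ = integrate.quad(ew_pdf, 0.0, np.inf)     # sanity check, should be ~1
print(f"average BER = {avg_ber:.3e}  (pdf norm = {norm:.4f})")
```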

  7. Reaction-diffusion on the fully-connected lattice: A + A → A

    NASA Astrophysics Data System (ADS)

    Turban, Loïc; Fortin, Jean-Yves

    2018-04-01

    Diffusion-coagulation can be simply described by a dynamics in which particles perform a random walk on a lattice and coalesce with probability unity when meeting on the same site. Such processes display non-equilibrium properties with strong fluctuations in low dimensions. In this work we study this problem on the fully-connected lattice, an infinite-dimensional system in the thermodynamic limit, for which mean-field behaviour is expected. Exact expressions for the particle density distribution at a given time and the survival-time distribution for a given number of particles are obtained. In particular, we show that the time needed to reach a finite number of surviving particles (vanishing density in the scaling limit) displays strong fluctuations and extreme value statistics, characterized by a universal class of non-Gaussian distributions with singular behaviour.

  8. Data documenting the potential distribution of Aedes aegypti in the center of Veracruz, Mexico.

    PubMed

    Estrada-Contreras, Israel; Sandoval-Ruiz, César A; Mendoza-Palmero, Fredy S; Ibáñez-Bernal, Sergio; Equihua, Miguel; Benítez, Griselda

    2017-02-01

    The data presented in this article are related to the research article entitled "Establishment of Aedes aegypti (L.) in mountainous regions in Mexico: Increasing number of population at risk of mosquito-borne disease and future climate conditions" (M. Equihua, S. Ibáñez-Bernal, G. Benítez, I. Estrada-Contreras, C.A. Sandoval-Ruiz, F.S. Mendoza-Palmero, 2016) [1]. This article provides presence records in shapefile format used to generate maps of the potential distribution of Aedes aegypti under different climate change scenarios, as well as each of the maps obtained in raster format. In addition, it provides tables with values of the potential distribution of the vector, as well as average values of the probability of presence, including data on mosquito incidence along the altitudinal range.

  9. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    NASA Astrophysics Data System (ADS)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance using a simple model of randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain information on the typical bubble size with a few hundred sources at a single redshift.

  10. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF, and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced by the addition of suitable extinction measurements, in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect; this strengthens similar observations based on numerical regularization methods.

  11. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte-Carlo method (method of statistical trials), as applied to the processing of meteor observations, was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of a meteor - which has the greatest influence on the precision of the computed meteor heliocentric orbit elements. In the classical approach, the velocity vector was calculated in two stages: first, its direction was calculated as the vector product of the poles of the meteor-trajectory great circles determined from the two observation points; then the absolute value of the velocity was calculated independently from each observation point, one of them being selected, for whatever reason, as the final value. In the given method we propose instead to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect that such an approach will substantially increase the precision of meteor velocity calculation and remove subjective inaccuracies.

  12. The Impact of an Instructional Intervention Designed to Support Development of Stochastic Understanding of Probability Distribution

    ERIC Educational Resources Information Center

    Conant, Darcy Lynn

    2013-01-01

    Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…

  13. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
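
    A toy, hand-coded fragment illustrating the flagging logic (not the paper's learned network; the sites, doses and probabilities are invented): a plan parameter whose conditional probability falls below a threshold is flagged for investigation.

      # P(prescription_dose | treatment_site), a hand-coded stand-in CPT:
      cpt_dose_given_site = {
          "lung":  {"60Gy": 0.70, "45Gy": 0.25, "20Gy": 0.05},
          "brain": {"60Gy": 0.55, "45Gy": 0.05, "20Gy": 0.40},
      }

      def plan_probability(site, dose, threshold=0.10):
          """Return the conditional probability of a plan parameter and
          whether it should be flagged as a potential error."""
          p = cpt_dose_given_site[site].get(dose, 0.0)
          return p, p < threshold        # flag for investigation if improbable

      print(plan_probability("brain", "45Gy"))   # (0.05, True) -> flagged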

  14. Ensemble learning and model averaging for material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that they can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability of a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know whether the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
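
    A minimal sketch of the class-probability summation, assuming equal priors and an isotropic Gaussian noise model for the library match (a simplified stand-in for the paper's normal-distribution and model-averaging variants):

      import numpy as np

      def class_posteriors(x, library, labels, sigma=0.01):
          """Posterior probability per material class for pixel spectrum x.
          library: (n_spectra, n_bands); labels: class per library spectrum.
          Assumes equal priors and isotropic Gaussian noise (stand-in)."""
          d2 = ((library - x) ** 2).sum(axis=1)
          loglik = -0.5 * d2 / sigma ** 2
          w = np.exp(loglik - loglik.max())      # stabilized likelihoods
          post = w / w.sum()                     # posterior per spectrum
          classes = {}
          for p, lab in zip(post, labels):
              classes[lab] = classes.get(lab, 0.0) + p   # sum within class
          return classes

      lib = np.array([[0.2, 0.5, 0.3], [0.21, 0.48, 0.31], [0.8, 0.1, 0.1]])
      print(class_posteriors(np.array([0.2, 0.49, 0.3]), lib,
                             ["fabric", "fabric", "metal"]))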

  15. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    PubMed

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011); doi:10.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011); doi:10.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
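
    For orientation, a sketch of the discrete-time MMCA-style recursion on an adjacency matrix (a simplified variant without the same-step reinfection term; the parameters and the random network are invented):

      import numpy as np

      def mmca_sis(A, beta, mu, p0, n_iter=500):
          """Discrete-time microscopic Markov-chain iteration for SIS:
          q_i = prod_j (1 - beta * A_ij * p_j);
          p_i <- (1 - p_i)(1 - q_i) + (1 - mu) p_i."""
          p = p0.copy()
          for _ in range(n_iter):
              q = np.prod(1.0 - beta * A * p[None, :], axis=1)
              p = (1.0 - p) * (1.0 - q) + (1.0 - mu) * p
          return p

      rng = np.random.default_rng(3)
      A = (rng.random((200, 200)) < 0.05).astype(float)  # ER-like adjacency
      A = np.triu(A, 1); A = A + A.T
      p = mmca_sis(A, beta=0.08, mu=0.2, p0=np.full(200, 0.1))
      print("epidemic prevalence ~", p.mean())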

  16. Probability distributions of the electroencephalogram envelope of preterm infants.

    PubMed

    Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro

    2015-06-01

    To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
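
    A minimal sketch of the envelope-fitting step, assuming a Hilbert-transform envelope and an AIC comparison between lognormal and gamma fits (the input series here is stand-in noise, not a preterm EEG recording):

      import numpy as np
      from scipy.signal import hilbert
      from scipy.stats import lognorm, gamma

      rng = np.random.default_rng(4)
      eeg = rng.standard_normal(10_000)       # stand-in for a filtered EEG trace
      env = np.abs(hilbert(eeg))              # Hilbert envelope

      for name, dist in [("lognormal", lognorm), ("gamma", gamma)]:
          params = dist.fit(env, floc=0)      # fix the location at zero
          ll = dist.logpdf(env, *params).sum()
          aic = 2 * len(params) - 2 * ll      # lower AIC = better fit
          print(name, "AIC:", round(aic, 1))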

  17. Wave and pseudo-diffusion equations from squeezed states

    NASA Technical Reports Server (NTRS)

    Daboul, Jamil

    1993-01-01

    We show that the probability distributions P_n(q,p;y) := |⟨n|p,q;y⟩|², which are obtained from squeezed states, obey an interesting partial differential equation, to which we give two intuitive interpretations: as a wave equation in one space dimension, and as a pseudo-diffusion equation. We also study the corresponding Wehrl entropies S_n(y), and we show that they have minima at zero squeezing, y = 0.

  18. Supersymmetric quantum mechanics method for the Fokker-Planck equation with applications to protein folding dynamics

    NASA Astrophysics Data System (ADS)

    Polotto, Franciele; Drigo Filho, Elso; Chahine, Jorge; Oliveira, Ronaldo Junio de

    2018-03-01

    This work developed analytical methods to explore the kinetics of the time-dependent probability distributions over thermodynamic free-energy profiles of protein folding and compared the results with simulation. The Fokker-Planck equation is mapped onto a Schrödinger-type equation due to the well-known solutions of the latter. Through a semi-analytical description, the supersymmetric quantum mechanics formalism is invoked and the time-dependent probability distributions are obtained with numerical calculations by using the variational method. A coarse-grained structure-based model of the two-state protein Tm CSP was simulated at a Cα level of resolution and the thermodynamics and kinetics were fully characterized. Analytical solutions under non-equilibrium conditions were obtained with the simulated double-well free-energy potential, and kinetic folding times were calculated. It was found that the analytical folding time as a function of temperature agrees quantitatively with simulations and with experiments from the literature on Tm CSP, exhibiting the well-known 'U' shape of chevron plots. The simple analytical model developed in this study has the potential to be used by theoreticians and experimentalists who wish to explore, quantitatively, rates and the kinetic behavior of their system by informing the thermally activated barrier. The theory developed describes a stochastic process and, therefore, can be applied to a variety of biological as well as condensed-phase two-state systems.
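
    The kinetic folding time extracted from a double-well profile can be illustrated with the standard mean first-passage-time double integral for overdamped (Smoluchowski) dynamics, a simpler stand-in for the paper's supersymmetric variational treatment; the profile below is invented:

      import numpy as np

      def mfpt(F, x, D=1.0, kT=1.0):
          """Mean first-passage time from x[0] (reflecting) to x[-1]
          (absorbing) over free-energy profile F(x), via
          tau = (1/D) * int_a^b dx e^{F/kT} * int_a^x dy e^{-F/kT}."""
          dx = x[1] - x[0]
          inner = np.cumsum(np.exp(-F / kT)) * dx    # running inner integral
          return (np.exp(F / kT) * inner).sum() * dx / D

      x = np.linspace(-1.5, 1.5, 2000)
      F = 5.0 * (x**2 - 1.0) ** 2                    # double well, ~5 kT barrier
      print("folding time ~", mfpt(F, x))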

  19. Permutation entropy of finite-length white-noise time series.

    PubMed

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ^{2} distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
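
    A minimal sketch of the PE computation for a finite white-noise series (unit delay, embedding dimension D; illustrative only):

      import numpy as np
      from math import factorial

      def permutation_entropy(x, D=3):
          """Permutation entropy (in nats) of a 1-D series using ordinal
          patterns of embedding dimension D and unit delay."""
          N = len(x) - D + 1
          patterns = np.array([np.argsort(x[i:i + D]) for i in range(N)])
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          p = counts / N
          return -(p * np.log(p)).sum()

      rng = np.random.default_rng(5)
      x = rng.standard_normal(10_000)              # finite white-noise series
      pe = permutation_entropy(x, D=3)
      print(pe, "max =", np.log(factorial(3)))     # approaches log(D!) as N grows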

  20. Comparative study of nonlinear properties of EEG signals of normal persons and epileptic patients

    PubMed Central

    2009-01-01

    Background: Investigation of the functioning of the brain in living systems has been a major effort amongst scientists and medical practitioners. Amongst the various disorders of the brain, epilepsy has drawn the most attention because this disorder can affect the quality of life of a person. In this paper we have reinvestigated the EEGs of normal and epileptic patients using surrogate analysis, the probability distribution function and the Hurst exponent. Results: Using random-shuffled surrogate analysis, we have obtained some of the nonlinear features that were obtained by Andrzejak et al. [Phys Rev E 2001, 64:061907] for epileptic patients during seizure. The probability distribution function shows that the activity of an epileptic brain is non-Gaussian in nature. The Hurst exponent has been shown to be useful for characterizing a normal and an epileptic brain, and it shows that the epileptic brain is long-term anticorrelated whereas the normal brain is more or less stochastic. Among all the techniques used here, the Hurst exponent is found to be particularly useful for characterizing the different cases. Conclusion: In this article, differences in characteristics for normal subjects with eyes open and closed, and for epileptic subjects during seizure and seizure-free intervals, have been shown mainly using the Hurst exponent. The Hurst exponent H shows that the brain activity of a normal person is uncorrelated in nature whereas epileptic brain activity shows long-range anticorrelation. PMID:19619290
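
    A minimal rescaled-range (R/S) sketch of the Hurst exponent estimate (one of several common estimators; the window sizes are arbitrary): H ≈ 0.5 indicates uncorrelated activity, H < 0.5 anticorrelation.

      import numpy as np

      def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
          """Rescaled-range (R/S) estimate of the Hurst exponent:
          E[R/S](n) ~ c * n^H, so H is the log-log slope."""
          rs = []
          for n in window_sizes:
              vals = []
              for start in range(0, len(x) - n + 1, n):  # non-overlapping windows
                  w = x[start:start + n]
                  z = np.cumsum(w - w.mean())
                  r, s = z.max() - z.min(), w.std()
                  if s > 0:
                      vals.append(r / s)
              rs.append(np.mean(vals))
          H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
          return H

      rng = np.random.default_rng(6)
      print(hurst_rs(rng.standard_normal(4096)))   # ~0.5 for uncorrelated noise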

  1. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE PAGES

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-15

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.

  2. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-01

    In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Monte Carlo Markov Chains, allowing orders of magnitude improvement in speed.

  3. The estimation of probable maximum precipitation: the case of Catalonia.

    PubMed

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
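
    The enveloping frequency-factor idea can be sketched with a Hershfield-type calculation (a simplified stand-in for the paper's procedure; the rainfall series below are synthetic): K_m is the envelope of the station frequency factors, and PMP = mean + K_m * std.

      import numpy as np

      def hershfield_pmp(station_series):
          """Enveloping frequency-factor estimate of 1-day PMP:
          K_m = max over stations of (x_max - mean_without_max) / std_without_max,
          then PMP = mean + K_m * std at each station."""
          ks = []
          for x in station_series:
              x = np.asarray(x, float)
              rest = np.delete(x, x.argmax())        # drop the record maximum
              ks.append((x.max() - rest.mean()) / rest.std())
          k_m = max(ks)                              # enveloping frequency factor
          return [x.mean() + k_m * x.std() for x in map(np.asarray, station_series)]

      rng = np.random.default_rng(7)
      series = [rng.gumbel(60, 20, size=40) for _ in range(5)]  # annual max rain (mm)
      print(np.round(hershfield_pmp(series), 1))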

  4. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the labels of the atlas images to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for the segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than the other five state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
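
    A schematic sketch of the hybrid fusion rule with scikit-learn (synthetic features in place of registered atlas patches; not the paper's pipeline): a Random Forest posterior acts as the discriminative prior, a per-label GMM supplies the generative likelihood, and the two are combined under Bayes' rule.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(8)
      X = rng.standard_normal((600, 10))      # stand-in voxel/patch features
      y = (X[:, 0] + 0.3 * rng.standard_normal(600) > 0).astype(int)

      # Discriminative prior: Random Forest posterior per label.
      rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      prior = rf.predict_proba(X)             # shape (n, 2)

      # Generative likelihood: one GMM of patches per label.
      loglik = np.column_stack([
          GaussianMixture(n_components=2, random_state=0)
              .fit(X[y == k]).score_samples(X)
          for k in (0, 1)
      ])

      # Bayes rule: posterior proportional to prior * likelihood, per voxel.
      post = prior * np.exp(loglik - loglik.max(axis=1, keepdims=True))
      post /= post.sum(axis=1, keepdims=True)
      print("accuracy of fused labels:", (post.argmax(axis=1) == y).mean())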

  5. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.

  6. Distribution-Preserving Stratified Sampling for Learning Problems.

    PubMed

    Cervellera, Cristiano; Maccio, Danilo

    2017-06-09

    The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, to obtain good training/test/validation sets, and to select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to that of the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights into the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples that are distributed as closely as possible to the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
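
    A minimal sketch of recursive-binary-partition stratified sampling under simple assumptions (median split on the widest dimension, proportional budget allocation; not the paper's exact algorithm):

      import numpy as np

      def stratified_sample(X, m, depth=6, rng=None):
          """Recursively split the input space on the median of the widest
          dimension, allocate the sample budget proportionally to the number
          of points in each half, then draw uniformly inside each leaf."""
          rng = rng or np.random.default_rng()
          if m <= 0 or len(X) == 0:
              return np.empty((0, X.shape[1]))
          if depth == 0 or len(X) <= m:
              idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
              return X[idx]
          d = np.argmax(X.max(axis=0) - X.min(axis=0))   # widest dimension
          med = np.median(X[:, d])
          left, right = X[X[:, d] <= med], X[X[:, d] > med]
          m_left = round(m * len(left) / len(X))         # proportional budget
          return np.vstack([
              stratified_sample(left, m_left, depth - 1, rng),
              stratified_sample(right, m - m_left, depth - 1, rng),
          ])

      X = np.random.default_rng(9).standard_normal((10_000, 3))
      S = stratified_sample(X, 200)
      print(S.shape, S.mean(axis=0))   # sample mean close to the data mean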

  7. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (thus forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
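
    The Log-Pearson Type III fit by the method of moments, as described above, reduces to a few lines; a hedged sketch with a synthetic gaged record (the 1% annual-exceedance-probability quantile is the "100-year" flood):

      import numpy as np
      from scipy.stats import pearson3, skew

      def lp3_quantile(annual_peaks, aep=0.01):
          """Log-Pearson Type III flood quantile by the method of moments:
          fit mean, std and skew of log10(peaks), then invert the CDF."""
          y = np.log10(annual_peaks)
          g = skew(y, bias=False)                 # sample skew of the logs
          q = pearson3.ppf(1.0 - aep, g, loc=y.mean(), scale=y.std(ddof=1))
          return 10 ** q

      rng = np.random.default_rng(10)
      peaks = rng.lognormal(mean=6.0, sigma=0.5, size=80)  # stand-in record
      print("100-yr flood ~", round(lp3_quantile(peaks), 0))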

  8. Quantitative assessment of direct and indirect landslide risk along transportation lines in southern India

    NASA Astrophysics Data System (ADS)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2010-06-01

    A quantitative approach for landslide risk assessment along transportation lines is presented and applied to a road and a railway alignment in the Nilgiri hills in southern India. The method allows estimating the direct risk affecting the alignments, vehicles and people, and the indirect risk resulting from the disruption of economic activities. The data required for the risk estimation were obtained from historical records. A total of 901 landslides initiating from cut slopes along the railway and road alignments were catalogued. The landslides were grouped into three magnitude classes based on the landslide type, volume, scar depth, run-out distance, etc., and their probability of occurrence was obtained using a frequency-volume distribution. Hazard, for a given return period, expressed as the number of landslides of a given magnitude class per kilometre of cut slopes, was obtained using a Gumbel distribution and the probability of landslide magnitude. In total, 18 specific hazard scenarios were generated using the three magnitude classes and six return periods (1, 3, 5, 15, 25, and 50 years). The assessment of the vulnerability of the road and railway line was based on damage records, whereas the vulnerability of different types of vehicles and people was subjectively assessed based on limited historic incidents. Direct specific loss for the alignments (railway line and road) and vehicles (train, bus, lorry, car and motorbike) was expressed in monetary value (US$), and direct specific loss of life of commuters was expressed as an annual probability of death. Indirect specific loss (US$) derived from the traffic interruption was evaluated considering alternative driving routes, and includes losses resulting from additional fuel consumption, additional travel cost, loss of income to local business, and loss of revenue to the railway department. The results indicate that the total loss, including both direct and indirect loss, for return periods from 1 to 50 years varies from US$ 90,840 to US$ 779,500, and the average annual total loss was estimated as US$ 35,000. The annual probability of a person most at risk travelling in a bus, lorry, car, motorbike or train is less than 10^-4 per annum in all the time periods considered. The detailed estimation of direct and indirect risk will facilitate developing landslide risk mitigation and management strategies for transportation lines in the study area.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Adam; Connaughton, Valerie; Briggs, Michael S.

    We present a method to estimate the jet opening angles of long duration gamma-ray bursts (GRBs) using the prompt gamma-ray energetics and an inversion of the Ghirlanda relation, which is a correlation between the time-integrated peak energy of the GRB prompt spectrum and the collimation-corrected energy in gamma-rays. The derived jet opening angles using this method and detailed assumptions match well with the corresponding inferred jet opening angles obtained when a break in the afterglow is observed. Furthermore, using a model of the predicted long GRB redshift probability distribution observable by the Fermi Gamma-ray Burst Monitor (GBM), we estimate the probability distributions for the jet opening angle and rest-frame energetics for a large sample of GBM GRBs for which the redshifts have not been observed. Previous studies have only used a handful of GRBs to estimate these properties due to the paucity of observed afterglow jet breaks, spectroscopic redshifts, and comprehensive prompt gamma-ray observations, and we potentially expand the number of GRBs that can be used in this analysis by more than an order of magnitude. In this analysis, we also present an inferred distribution of jet breaks which indicates that a large fraction of jet breaks are not observable with current instrumentation and observing strategies. We present simple parameterizations for the jet angle, energetics, and jet break distributions so that they may be used in future studies.

  10. A study of the application of power-spectral methods of generalized harmonic analysis to gust loads on airplanes

    NASA Technical Reports Server (NTRS)

    Press, Harry; Mazelsky, Bernard

    1954-01-01

    The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
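
    The core spectral relation, Φ_out(ω) = |H(ω)|² Φ_in(ω) with σ² = ∫ Φ_out dω, can be sketched numerically; the gust spectrum and airplane frequency-response function below are invented toy forms:

      import numpy as np

      # Output PSD of a linear system: Phi_out(w) = |H(w)|^2 * Phi_in(w);
      # the load standard deviation follows from sigma^2 = int Phi_out dw.
      w = np.linspace(0.01, 50.0, 5000)        # frequency grid (rad/s)
      L = 100.0
      phi_gust = 2.0 * L / np.pi / (1.0 + (L * w) ** 2)   # toy gust spectrum
      H = 1.0 / np.sqrt((1.0 - (w / 10.0) ** 2) ** 2
                        + (0.3 * w / 10.0) ** 2)          # toy airplane response

      phi_load = np.abs(H) ** 2 * phi_gust
      sigma = np.sqrt(np.sum(phi_load) * (w[1] - w[0]))   # rms load
      print("rms load ~", round(sigma, 3))
      # For a normal load distribution, |load| exceeds 3*sigma ~0.27% of the time.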

  11. Laboratory demonstration of lightning strike pattern on different roof tops installed with Franklin Rods

    NASA Astrophysics Data System (ADS)

    Ullah, Irshad; Baharom, MNR; Ahmed, H.; Luqman, HM.; Zainal, Zainab

    2017-11-01

    Protection against lightning is always a challenging job for the researcher, and the consequences of lightning on buildings of different shapes require comprehensive knowledge in order to provide information to the common man. This paper is mainly concerned with the lightning strike pattern on buildings of different shapes. The work is based on practical experiments in a high-voltage laboratory. Scaled structures of different shapes were selected in order to investigate the equal distribution of lightning voltage; an equal distribution of lightning voltage yields the maximum probability of a lightning strike on the air terminal of the selected shapes. Building shape plays a very important role in lightning protection: the roof tops have different geometries, and the Franklin rod installation also varies with the shape of the roof top. In accordance with the ambient weather conditions of Malaysia, high-voltage impulses were applied to the lightning rods installed on the different geometrical shapes. An equal distribution of the high-voltage impulse was obtained because the geometry of the scaled structures is identical and the air gap for all tested objects was kept the same. This equal distribution of the lightning voltage also shows that the probability of a lightning strike is highest at the corners and edges of the building structure.

  12. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
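
    The essence of the Bayesian time-interval update can be sketched with the conjugate exponential-gamma pair (a simplification of the paper's analysis; all rates are invented):

      import numpy as np
      from scipy.stats import gamma

      def posterior_rate(intervals, a0=1.0, b0=1.0):
          """Conjugate update for a Poisson count rate from interval data:
          exponential intervals with rate lam, Gamma(a0, b0) prior on lam,
          posterior is Gamma(a0 + n, b0 + sum(intervals))."""
          n, t = len(intervals), np.sum(intervals)
          return a0 + n, b0 + t                 # posterior shape, rate

      rng = np.random.default_rng(11)
      bg_rate, true_rate = 5.0, 12.0            # counts per second (assumed)
      intervals = rng.exponential(1.0 / true_rate, size=30)

      a, b = posterior_rate(intervals)
      p_above_bg = gamma.sf(bg_rate, a, scale=1.0 / b)  # P(lam > background)
      print("P(source present) ~", round(p_above_bg, 4))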

  13. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a post-equilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), pre-equilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  14. Comparative Risk Analysis of Two Culicoides-Borne Diseases in Horses: Equine Encephalosis More Likely to Enter France than African Horse Sickness.

    PubMed

    Faverjon, C; Leblond, A; Lecollinet, S; Bødker, R; de Koeijer, A A; Fischer, E A J

    2017-12-01

    African horse sickness (AHS) and equine encephalosis (EE) are Culicoides-borne viral diseases that could have the potential to spread across Europe if introduced, thus being potential threats for the European equine industry. Both share similar epidemiology, transmission patterns and geographical distribution. Using stochastic spatiotemporal models of virus entry, we assessed and compared the probabilities of both viruses entering France via two pathways: importation of live-infected animals or importation of infected vectors. Analyses were performed for three consecutive years (2010-2012). Seasonal and regional differences in virus entry probabilities were the same for both diseases. However, the probability of EE entry was much higher than the probability of AHS entry. Interestingly, the most likely entry route differed between AHS and EE: AHS has a higher probability to enter through an infected vector and EE has a higher probability to enter through an infectious host. Consequently, different effective protective measures were identified by 'what-if' scenarios for the two diseases. The implementation of vector protection on all animals (equine and bovine) coming from low-risk regions before their importation was the most effective in reducing the probability of AHS entry. On the other hand, the most significant reduction in the probability of EE entry was obtained by the implementation of quarantine before import for horses coming from both EU and non-EU countries. The developed models can be useful to implement risk-based surveillance. © 2016 Blackwell Verlag GmbH.

  15. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  16. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
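
    A worked example of the kind of conditional-probability question quoted above, using the standard conditional law of the bivariate normal (the height/weight parameters are illustrative, not from the SOCR dataset):

      import numpy as np
      from scipy.stats import norm

      # Conditional law of a bivariate normal: W | H=h is normal with
      # mean = mu_w + rho*sigma_w*(h - mu_h)/sigma_h, var = sigma_w^2*(1 - rho^2).
      mu_h, sigma_h = 66.0, 3.5       # height (in), illustrative parameters
      mu_w, sigma_w = 130.0, 20.0     # weight (lb)
      rho = 0.6

      h = mu_h                         # "average height"
      m = mu_w + rho * sigma_w * (h - mu_h) / sigma_h
      s = sigma_w * np.sqrt(1.0 - rho ** 2)
      p = norm.cdf(140.0, m, s) - norm.cdf(120.0, m, s)
      print("P(120 < W < 140 | H = average) ~", round(p, 3))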

  17. Non-Gaussian elliptic-flow fluctuations in PbPb collisions at √s_NN = 5.02 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, Albert M; et al.

    Event-by-event fluctuations in the elliptic-flow coefficient v_2 are studied in PbPb collisions at √s_NN = 5.02 TeV using the CMS detector at the CERN LHC. Elliptic-flow probability distributions p(v_2) for charged particles with transverse momentum 0.3 < pT < 3.0 GeV and pseudorapidity |η| < 1.0 are determined for different collision centrality classes. The moments of the p(v_2) distributions are used to calculate the v_2 coefficients based on cumulant orders 2, 4, 6, and 8. A rank ordering of the higher-order cumulant results and nonzero standardized skewness values obtained for the p(v_2) distributions indicate non-Gaussian initial-state fluctuation behavior. Bessel-Gaussian and elliptic power fits to the flow distributions are studied to characterize the initial-state spatial anisotropy.

  18. Fission fragment mass and total kinetic energy distributions of spontaneously fissioning plutonium isotopes

    NASA Astrophysics Data System (ADS)

    Pomorski, K.; Nerlo-Pomorska, B.; Bartel, J.; Schmitt, C.

    2018-03-01

    The fission-fragment mass and total kinetic energy (TKE) distributions are evaluated in a quantum mechanical framework using elongation, mass asymmetry, neck degree of freedom as the relevant collective parameters in the Fourier shape parametrization recently developed by us. The potential energy surfaces (PES) are calculated within the macroscopic-microscopic model based on the Lublin-Strasbourg Drop (LSD), the Yukawa-folded (YF) single-particle potential and a monopole pairing force. The PES are presented and analysed in detail for even-even Plutonium isotopes with A = 236-246. They reveal deep asymmetric valleys. The fission-fragment mass and TKE distributions are obtained from the ground state of a collective Hamiltonian computed within the Born-Oppenheimer approximation, in the WKB approach by introducing a neck-dependent fission probability. The calculated mass and total kinetic energy distributions are found in good agreement with the data.

  19. Species abundance distribution and population dynamics in a two-community model of neutral ecology

    NASA Astrophysics Data System (ADS)

    Vallade, M.; Houchmandzadeh, B.

    2006-11-01

    Explicit formulas for the steady-state distribution of species in two interconnected communities of arbitrary sizes are derived in the framework of Hubbell’s neutral model of biodiversity. Migrations of seeds from both communities as well as mutations in both of them are taken into account. These results generalize those previously obtained for the “island-continent” model and they allow an analysis of the influence of the ratio of the sizes of the two communities on the dominance/diversity equilibrium. Exact expressions for species abundance distributions are deduced from a master equation for the joint probability distribution of species in the two communities. Moreover, an approximate self-consistent solution is derived. It corresponds to a generalization of previous results and it proves to be accurate over a broad range of parameters. The dynamical correlations between the abundances of a species in both communities are also discussed.

  20. On buffer overflow duration in a finite-capacity queueing system with multiple vacation policy

    NASA Astrophysics Data System (ADS)

    Kempa, Wojciech M.

    2017-12-01

    A finite-buffer queueing system with Poisson arrivals and generally distributed processing times, operating under a multiple vacation policy, is considered. Each time the system becomes empty, the service station takes successive independent and identically distributed vacation periods until, at the completion epoch of one of them, at least one job waiting for service is detected in the buffer. Applying an analytical approach based on the idea of an embedded Markov chain, integral equations and linear algebra, a compact-form representation for the cumulative distribution function (CDF for short) of the first buffer overflow duration is found. Hence, the formula for the CDF of subsequent such periods is obtained. Moreover, the probability distributions of the number of job losses in successive buffer overflow periods are found. The considered queueing system can be efficiently applied in modelling energy-saving mechanisms in wireless network communication.

  1. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
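
    The univariate (beta-invariant) case mentioned above is easy to check numerically: for dX = [a(1-X) - bX] dt + sqrt(2cX(1-X)) dW, the potential-solution argument gives a Beta(a/c, b/c) invariant density. A minimal Euler-Maruyama sketch (the step size and the clipping at the boundaries are numerical conveniences):

      import numpy as np

      a, b, c = 2.0, 3.0, 1.0
      dt, n_steps, n_samples = 1e-3, 5_000, 10_000

      rng = np.random.default_rng(12)
      x = np.full(n_samples, 0.5)
      for _ in range(n_steps):
          drift = a * (1.0 - x) - b * x
          diff = np.sqrt(np.maximum(2.0 * c * x * (1.0 - x), 0.0))
          x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal(n_samples)
          x = np.clip(x, 0.0, 1.0)       # keep samples in the bounded space

      print("sample mean:", x.mean(), "Beta mean:", a / (a + b))  # both ~0.4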

  2. Stylized facts in internal rates of return on stock index and its derivative transactions

    NASA Astrophysics Data System (ADS)

    Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya

    2007-08-01

    Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. Resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions, in case a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions may be useful in deducing the type of investment strategy from trading revenues of small portfolio investors.

  3. Mathematical Model to estimate the wind power using four-parameter Burr distribution

    NASA Astrophysics Data System (ADS)

    Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu

    2018-03-01

    When the real probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is more suitable than other distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and the expression for the probability distribution of the output power of a wind turbine is deduced.
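
    A hedged sketch of the deduced quantity, the expected turbine output under a four-parameter Burr (Burr XII plus location and scale) wind-speed law; the distribution parameters and power curve below are invented:

      import numpy as np
      from scipy.stats import burr12

      # Burr XII with shapes c, d plus loc and scale = four-parameter Burr.
      c, d, loc, scale = 2.0, 3.0, 0.0, 8.0     # illustrative wind-speed fit (m/s)

      def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
          """Piecewise turbine power curve (MW): cubic between cut-in and rated."""
          p = np.where((v >= v_in) & (v < v_rated),
                       p_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
          return np.where((v >= v_rated) & (v < v_out), p_rated, p)

      v = np.linspace(0.0, 30.0, 3000)
      pdf = burr12.pdf(v, c, d, loc=loc, scale=scale)
      expected_power = np.sum(power_curve(v) * pdf) * (v[1] - v[0])
      print("expected output ~", round(expected_power, 3), "MW")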

  4. Obtaining T1-T2 distribution functions from 1-dimensional T1 and T2 measurements: The pseudo 2-D relaxation model

    NASA Astrophysics Data System (ADS)

    Williamson, Nathan H.; Röding, Magnus; Galvosas, Petrik; Miklavcic, Stanley J.; Nydén, Magnus

    2016-08-01

    We present the pseudo 2-D relaxation model (P2DRM), a method to estimate multidimensional probability distributions of material parameters from independent 1-D measurements. We illustrate its use on 1-D T1 and T2 relaxation measurements of saturated rock and evaluate it on both simulated and experimental T1-T2 correlation measurement data sets. Results were in excellent agreement with the actual, known 2-D distribution in the case of the simulated data set. In both the simulated and experimental case, the functional relationships between T1 and T2 were in good agreement with the T1-T2 correlation maps from the 2-D inverse Laplace transform of the full 2-D data sets. When a 1-D CPMG experiment is combined with a rapid T1 measurement, the P2DRM provides a double-shot method for obtaining a T1-T2 relationship, with significantly decreased experimental time in comparison to the full T1-T2 correlation measurement.

  5. Probabilistic Tsunami Hazard Assessment along Nankai Trough (1) An assessment based on the information of the forthcoming earthquake that Earthquake Research Committee(2013) evaluated

    NASA Astrophysics Data System (ADS)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Morikawa, N.; Kawai, S.; Ohsumi, T.; Aoi, S.; Yamamoto, N.; Matsuyama, H.; Toyama, N.; Kito, T.; Murashima, Y.; Murata, Y.; Inoue, T.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.

    2015-12-01

    The Earthquake Research Committee (ERC)/HERP, Government of Japan (2013) revised their long-term evaluation of the forthcoming large earthquake along the Nankai Trough; the next earthquake is estimated at M8 to 9 class, and the probability (P30) that it will occur within the next 30 years (from Jan. 1, 2013) is 60% to 70%. In this study, we assess tsunami hazards (maximum coastal tsunami heights) in the near future from the next earthquake along the Nankai Trough in terms of a probabilistic approach, on the basis of the ERC (2013) report. The probabilistic tsunami hazard assessment proceeds as follows: (1) Characterized earthquake fault models (CEFMs) are constructed on each of the 15 hypothetical source areas (HSA) that ERC (2013) identified; the characterization rule follows Toyama et al. (2015, JpGU), and we obtained a total of 1441 CEFMs. (2) We calculate tsunamis due to the CEFMs by solving nonlinear, finite-amplitude, long-wave equations with advection and bottom friction terms by a finite-difference method; run-up computation on land is included. (3) A time-predictable model predicts that the recurrence interval of the present seismic cycle is T = 88.2 years (ERC, 2013). We fix P30 = 67% by applying a renewal process based on the BPT distribution with mean T and aperiodicity alpha = 0.24. (4) We divide the probability P30 into P30(i) for the i-th subgroup, consisting of the earthquakes occurring in each of the 15 HSA, by following a probability re-distribution concept (ERC, 2014). Each earthquake (CEFM) in the i-th subgroup is then assigned a probability P30(i)/N, where N is the number of CEFMs in the subgroup. Note that this re-distribution of the probability is necessarily tentative, because present seismology cannot yet provide sufficient knowledge to constrain it; an epistemic logic-tree approach may be required in the future. (5) We synthesize tsunami hazard curves at every evaluation point on the coasts by integrating the 30-year occurrence probabilities P30(i) for all earthquakes (CEFMs) with the calculated maximum coastal tsunami heights. In the synthesis, aleatory uncertainties relating to incompleteness of the governing equations, CEFM modeling, bathymetry and topography data, etc., are modeled assuming a log-normal probability distribution. Examples of tsunami hazard curves will be presented.
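
    Step (3) is straightforward to reproduce: a BPT distribution with mean T and aperiodicity alpha is an inverse Gaussian with mean T and shape T/alpha^2. The sketch below computes a conditional 30-year probability from the quoted T = 88.2 yr and alpha = 0.24; the elapsed time since the last rupture is an assumed value chosen only to illustrate the calculation, not the committee's input.

    ```python
    # Sketch: 30-year conditional rupture probability from a BPT renewal model.
    from scipy import stats

    T, alpha = 88.2, 0.24
    # BPT(mean m, aperiodicity a) = inverse Gaussian with mean m, shape m/a**2;
    # in scipy's parametrization: invgauss(mu=a**2, scale=m/a**2).
    bpt = stats.invgauss(mu=alpha**2, scale=T / alpha**2)

    elapsed = 66.9  # assumed years since the previous rupture (illustrative)
    p30 = (bpt.cdf(elapsed + 30.0) - bpt.cdf(elapsed)) / bpt.sf(elapsed)
    print(f"P30 = {p30:.2%}")  # lands near the quoted 60-70% range
    ```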

  6. Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.

    2018-04-01

    Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.

  7. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
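
    A minimal concrete instance of this recursive (Rosenblatt-type) construction, for a correlated bivariate normal target rather than a general F: invert the marginal CDF of x_1, then the conditional CDF of x_2 given x_1. The Gaussian target and correlation are assumptions for illustration.

    ```python
    # Sketch: x1 = F1^{-1}(U1), x2 = F_{2|1}^{-1}(U2; x1) for a bivariate normal.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    rho = 0.8
    u1, u2 = rng.uniform(size=(2, 100000))

    x1 = stats.norm.ppf(u1)                        # marginal inverse CDF
    x2 = stats.norm.ppf(u2, loc=rho * x1,          # conditional inverse CDF
                        scale=np.sqrt(1.0 - rho**2))

    print("sample corr:", np.corrcoef(x1, x2)[0, 1], "target:", rho)
    ```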

  8. Nuclear risk analysis of the Ulysses mission

    NASA Astrophysics Data System (ADS)

    Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.

    1991-01-01

    The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source-term probability distributions provided by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor determined, and the functional relationship among all the factors established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
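
    The final step, from Monte Carlo consequence samples to a CCDF, can be sketched as follows; the lognormal stand-in for calculated health effects and the scenario probability are assumptions, not values from the safety analysis.

    ```python
    # Sketch: empirical CCDF, P(consequence >= x), from Monte Carlo samples.
    import numpy as np

    rng = np.random.default_rng(4)
    p_accident = 1e-3   # assumed probability that the accident scenario occurs
    health_effects = rng.lognormal(mean=-2.0, sigma=1.5, size=100000)

    x = np.sort(health_effects)
    exceedance = 1.0 - np.arange(1, x.size + 1) / x.size  # P(C > x) given release
    ccdf = p_accident * exceedance                        # overall CCDF points

    for level in (0.01, 0.1, 1.0):
        print(f"P(consequence >= {level}): {p_accident * np.mean(x >= level):.2e}")
    ```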

  9. Adaptation in pronoun resolution: Evidence from Brazilian and European Portuguese.

    PubMed

    Fernandes, Eunice G; Luegi, Paula; Correa Soares, Eduardo; de la Fuente, Israel; Hemforth, Barbara

    2018-04-26

    Previous research accounting for pronoun resolution as a problem of probabilistic inference has not explored the phenomenon of adaptation, whereby the processor constantly tracks and adapts, rationally, to changes in a statistical environment. We investigate whether Brazilian (BP) and European Portuguese (EP) speakers adapt to variations in the probability of occurrence of ambiguous overt and null pronouns, in two experiments assessing resolution toward subject and object referents. For each variety (BP, EP), participants were faced with either the same number of null and overt pronouns (equal distribution), or with an environment with fewer overt (than null) pronouns (unequal distribution). We find that the preference for interpreting overt pronouns as referring back to an object referent (object-biased interpretation) is higher when there are fewer overt pronouns (i.e., in the unequal, relative to the equal distribution condition). This is especially the case for BP, a variety with higher prior frequency and smaller object-biased interpretation of overt pronouns, suggesting that participants adapted incrementally and integrated prior statistical knowledge with the knowledge obtained in the experiment. We hypothesize that comprehenders adapted rationally, with the goal of maintaining, across variations in pronoun probability, the likelihood of subject and object referents. Our findings unify insights from research in pronoun resolution and in adaptation, and add to previous studies in both topics: They provide evidence for the influence of pronoun probability in pronoun resolution, and for an adaptation process whereby the language processor not only tracks statistical information, but uses it to make interpretational inferences.

  10. Maximum entropy principle for transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilich, F.; Da Silva, R.

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  11. Integrated seismic stochastic inversion and multi-attributes to delineate reservoir distribution: Case study MZ fields, Central Sumatra Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.

    2017-07-01

    This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, in the formation within the depth interval between the Top Sihapas and Top Pematang. The method used is stochastic inversion integrated with seismic multi-attributes by applying a Probabilistic Neural Network (PNN). Stochastic methods are used to predict the probability of sandstone, mapped from the impedance varied over 50 realizations, which yields a robust probability estimate. Stochastic seismic inversion analysis is more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures more diverse uncertainty, so that the probability values will be close to the actual values. The produced AI is then used as an input for a multi-attribute analysis, which is used to predict the gamma-ray, density, and porosity logs. To select the attributes to be used, a stepwise regression algorithm is applied; the resulting attributes are the inputs to the PNN. The PNN method is chosen because it has the best correlation among the neural network methods considered. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density, and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.

  12. Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.

    PubMed

    Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin

    2007-07-01

    Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principal approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a complementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.

  13. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing.
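
    The identification quoted above can be checked numerically: if the 2-D force-magnitude density is P(F) = exp(-F), the isotropic joint density in the plane is P(F)/(2*pi*F), and marginalizing over one component gives a Cartesian density K0(|fx|)/pi, with K0 the modified Bessel function of the second kind. A sketch of this relationship (not the paper's transform machinery):

    ```python
    # Numerical check: exponential magnitude density <-> Bessel-K0 Cartesian density.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import k0

    def cartesian_pdf(fx):
        # Marginalize the isotropic joint density P(F)/(2*pi*F), F = hypot(fx, fy).
        integrand = lambda fy: np.exp(-np.hypot(fx, fy)) / (2.0 * np.pi * np.hypot(fx, fy))
        val, _ = quad(integrand, -np.inf, np.inf)
        return val

    for fx in (0.5, 1.0, 2.0):
        print(fx, cartesian_pdf(fx), k0(fx) / np.pi)  # the two columns should agree
    ```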

  14. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
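
    A hedged sketch of a Box-Cox transform-normal fit of the kind favored here (the paper's modified version of the transform is not reproduced); the synthetic gamma-distributed monthly totals are an assumption.

    ```python
    # Sketch: Box-Cox-transform monthly totals, fit a normal on the transformed
    # scale, and read non-exceedance probabilities back on the original scale.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    precip = rng.gamma(shape=2.0, scale=30.0, size=100)  # e.g. 100 years of July totals

    transformed, lam = stats.boxcox(precip)              # ML estimate of lambda
    mu, sigma = transformed.mean(), transformed.std(ddof=1)

    def non_exceedance(p_mm):
        """P(monthly total <= p_mm) under the fitted transform-normal model."""
        z = (stats.boxcox(np.array([p_mm]), lmbda=lam)[0] - mu) / sigma
        return stats.norm.cdf(z)

    print("P(total <= 50 mm) =", non_exceedance(50.0))
    ```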

  15. Estimation of synthetic flood design hydrographs using a distributed rainfall-runoff model coupled with a copula-based single storm rainfall generator

    NASA Astrophysics Data System (ADS)

    Candela, A.; Brigandì, G.; Aronica, G. T.

    2014-07-01

    In this paper a procedure to derive synthetic flood design hydrographs (SFDH) using a bivariate representation of rainfall forcing (rainfall duration and intensity) via copulas, which describes and models the correlation between two variables independently of the marginal laws involved, coupled with a distributed rainfall-runoff model, is presented. Rainfall-runoff modelling (R-R modelling) for estimating the hydrological response at the outlet of a catchment was performed by using a conceptual fully distributed procedure based on the Soil Conservation Service - Curve Number method as an excess rainfall model and on a distributed unit hydrograph with climatic dependencies for the flow routing. Travel time computation, based on the distributed unit hydrograph definition, was performed by implementing a procedure based on flow paths, determined from a digital elevation model (DEM) and roughness parameters obtained from distributed geographical information. In order to estimate the primary return period of the SFDH, which provides the probability of occurrence of a hydrograph flood, peaks and flow volumes obtained through R-R modelling were treated statistically using copulas. Finally, the shapes of hydrographs have been generated on the basis of historically significant flood events, via cluster analysis. An application of the procedure described above has been carried out and results presented for the case study of the Imera catchment in Sicily, Italy.
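
    The excess-rainfall step rests on the standard SCS Curve Number relations, which are compact enough to state directly (mm units; the CN value below is an arbitrary example, not one calibrated for the Imera catchment).

    ```python
    # Standard SCS-CN direct-runoff relations (metric units).
    def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff depth Q from storm rainfall P via the SCS-CN method."""
        s = 25400.0 / cn - 254.0     # potential maximum retention [mm]
        ia = ia_ratio * s            # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=80.0, cn=75.0))  # ~27 mm of excess rainfall
    ```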

  16. Nuclear Forensics Analysis with Missing and Uncertain Data

    DOE PAGES

    Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent

    2015-10-05

    We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
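
    A heavily simplified sketch of the imputation idea (not the MCBDG code, and omitting its error-distribution modeling): draw each missing entry from the empirical distribution of the observed values in its column, and repeat to produce multiple completed databases.

    ```python
    # Sketch: empirical-distribution imputation with multiple completed copies.
    import numpy as np

    rng = np.random.default_rng(6)
    table = rng.normal(size=(200, 5))
    table[rng.random(table.shape) < 0.6] = np.nan  # ~60% missing, as in SFCOMPO

    def completed_copies(data, n_copies=10):
        copies = []
        for _ in range(n_copies):
            filled = data.copy()
            for j in range(data.shape[1]):
                observed = data[~np.isnan(data[:, j]), j]
                missing = np.isnan(filled[:, j])
                filled[missing, j] = rng.choice(observed, size=missing.sum())
            copies.append(filled)
        return copies

    ensemble = completed_copies(table)
    print(len(ensemble), "completed databases;", np.isnan(ensemble[0]).sum(), "NaNs left")
    ```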

  17. Virtual detector theory for strong-field atomic ionization

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Tian, Justin; Eberly, J. H.

    2018-04-01

    A virtual detector (VD) is an imaginary device located at a fixed position in space that extracts information from the wave packet passing through it. By recording the particle momentum and the corresponding probability current at each time, the VDs can accumulate and build the differential momentum distribution of the particle, in a way that resembles real experiments. A mathematical proof is given for the equivalence of the differential momentum distribution obtained by the VD method and by Fourier transforming the wave function. In addition to being a tool for reducing the computational load, VDs have also been found useful in interpreting the ultrafast strong-field ionization process, especially the controversial quantum tunneling process.

  18. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  19. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form

    USGS Publications Warehouse

    Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.

    1979-01-01

    Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
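
    For reference, the r-th probability weighted moment is beta_r = E[X F(X)^r]. The sketch below forms the standard unbiased sample estimates and, as a worked example, recovers Gumbel parameters from the first two via the L-moments lambda_1 = beta_0 and lambda_2 = 2*beta_1 - beta_0 (the synthetic data are an assumption; the Tukey-lambda case treated in the paper works analogously).

    ```python
    # Sketch: sample probability weighted moments and a Gumbel fit from them.
    import numpy as np

    rng = np.random.default_rng(7)
    x = np.sort(rng.gumbel(loc=10.0, scale=3.0, size=2000))

    def pwm(xs, r):
        """Unbiased estimator of beta_r = E[X F(X)^r] from an ordered sample."""
        m = xs.size
        i = np.arange(1, m + 1)
        w = np.ones(m)
        for k in range(1, r + 1):
            w = w * (i - k) / (m - k)
        return np.mean(w * xs)

    b0, b1 = pwm(x, 0), pwm(x, 1)
    lam1, lam2 = b0, 2.0 * b1 - b0         # first two L-moments
    scale = lam2 / np.log(2.0)             # Gumbel: lambda2 = scale * ln 2
    loc = lam1 - 0.5772156649 * scale      # lambda1 = loc + gamma * scale
    print("estimated loc, scale:", loc, scale)
    ```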

  20. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  1. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.

  2. DEM simulation of flow of dumbbells on a rough inclined plane

    NASA Astrophysics Data System (ADS)

    Mandal, Sandip; Khakhar, Devang

    2015-11-01

    The rheology of non-spherical granular materials such as food grains, sugar cubes, sand, and pharmaceutical pills, among others, is not well understood. We study the flow of non-spherical dumbbells of different aspect ratios on a rough inclined plane using soft-sphere DEM simulations. The dumbbells are generated by fusing two spheres together, and a linear spring-dashpot model along with Coulombic friction is employed to calculate inter-particle forces. At steady state, a uni-directional shear flow is obtained which allows for a detailed study of the rheology. The effect of aspect ratio and inclination angle on the mean velocity, volume fraction, shear rate, shear stress, pressure, and viscosity profiles is examined. The effect of aspect ratio on the probability distribution of the angles made by the major axes of the dumbbells with the flow direction, on the average angle, and on the order parameter is analyzed. The dense-flow rheology is well explained by Bagnold's law and the constitutive laws of the JFP model. The dependencies of the first and second normal stress differences on aspect ratio are studied. The probability distributions of translational and rotational velocity are also analyzed.

  3. Singular Behavior of the Leading Lyapunov Exponent of a Product of Random 2 × 2 Matrices

    NASA Astrophysics Data System (ADS)

    Genovese, Giuseppe; Giacomin, Giambattista; Greenblatt, Rafael Leon

    2017-05-01

    We consider a certain infinite product of random 2 × 2 matrices appearing in the solution of some 1 and 1 + 1 dimensional disordered models in statistical mechanics, which depends on a parameter ε > 0 and on a real random variable with distribution μ. For a large class of μ, we prove the prediction by Derrida and Hilhorst (J Phys A 16:2641, 1983) that the Lyapunov exponent behaves like C ε^(2α) in the limit ε ↘ 0, where α ∈ (0,1) and C > 0 are determined by μ. Derrida and Hilhorst performed a two-scale analysis of the integral equation for the invariant distribution of the Markov chain associated to the matrix product and obtained a probability measure that is expected to be close to the invariant one for small ε. We introduce suitable norms and exploit contractivity properties to show that such a probability measure is indeed close to the invariant one, in a sense that implies a suitable control of the Lyapunov exponent.

  4. Cross-stream migration of active particles

    NASA Astrophysics Data System (ADS)

    Uspal, William; Katuri, Jaideep; Simmchen, Juliane; Miguel-Lopez, Albert; Sanchez, Samuel

    For natural microswimmers, the interplay of swimming activity and external flow can promote robust directed motion, e.g. propulsion against (upstream rheotaxis) or perpendicular to the direction of flow. These effects are generally attributed to their complex body shapes and flagellar beat patterns. Here, using catalytic Janus particles as a model system, we report on a strong directional response that naturally emerges for spherical active particles in a channel flow. The particles align their propulsion axis to be perpendicular to both the direction of flow and the normal vector of a nearby bounding surface. We develop a deterministic theoretical model that captures this spontaneous transverse orientational order. We show how the directional response emerges from the interplay of external shear flow and swimmer/surface interactions (e.g., hydrodynamic interactions) that originate in swimming activity. Finally, adding the effect of thermal noise, we obtain probability distributions for the swimmer orientation that show good agreement with the experimental probability distributions. Our findings show that the qualitative response of microswimmers to flow is sensitive to the detailed interaction between individual microswimmers and bounding surfaces.

  5. Determination of the Changes of Drought Occurrence in Turkey Using Regional Climate Modeling

    NASA Astrophysics Data System (ADS)

    Sibel Saygili, Fatma; Tufan Turp, M.; Kurnaz, M. Levent

    2017-04-01

    As a consequence of the negative impacts of climate change, Turkey, a country in the Mediterranean Basin, is under serious risk of increased drought conditions. This study aims to determine and compare the spatial distributions of climatological drought probabilities for Turkey. For this purpose, the Regional Climate Model (RegCM4.4) of the Abdus Salam International Centre for Theoretical Physics (ICTP) is used to downscale the outputs of the MPI-ESM-MR global climate model of the Max Planck Institute for Meteorology to 50 km over Turkey. To make the future projection for the period 2071-2100 with respect to the reference period 1986-2005, the worst-case emission pathway RCP8.5 is used. The Palmer Drought Severity Index (PDSI) values are computed and classified in accordance with the seven classes of the National Oceanic and Atmospheric Administration (NOAA). Finally, spatial distribution maps showing the changes in drought probabilities over Turkey are obtained in order to see the impact of climate change on Turkey's drought patterns.

  6. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.

  7. Long-time behavior of material-surface curvature in isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Girimaji, S. S.

    1992-01-01

    The behavior at large times of the curvature of material elements in turbulence is investigated using Lagrangian velocity-gradient time series obtained from direct numerical simulations of isotropic turbulence. The main objectives are: to study the asymptotic behavior of the curvature pdf as a function of initial curvature and shape; and to establish whether the curvature of an initially plane material element goes to a stationary probability distribution. The evidence available in the literature about the asymptotic curvature pdf of initially flat surfaces is ambiguous, and the conjecture is that it is quasi-stationary. In this work several material-element ensembles of different initial curvatures and shapes are studied. It is found that, at long times, the moments of the logarithm of curvature are independent of the initial pdf of curvature. This, it is argued, supports the view that the curvature attains a stationary distribution at long times. It is also shown that, irrespective of initial shape or curvature, the shape of any material element at long times is cylindrical with a high probability.

  8. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime of a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.

  9. Critical behavior in earthquake energy dissipation

    NASA Astrophysics Data System (ADS)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes flanked by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to largest bursts of activity, such as major earthquakes, as opposed to smaller activations which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  10. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature, and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were studied, using data recorded at 30 synoptic stations of Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution type of the parameters affecting ET0 can affect the distribution of reference evapotranspiration.
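
    The PPCC statistic itself is easy to sketch: correlate the ordered sample with a candidate distribution's quantiles at plotting positions and prefer the candidate with r nearest 1. The candidate set, the Gringorten plotting position, and the synthetic data below are illustrative assumptions, not the study's choices.

    ```python
    # Sketch: PPCC goodness-of-fit comparison across candidate distributions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    et0 = stats.pearson3.rvs(skew=0.8, loc=1100.0, scale=120.0, size=55,
                             random_state=rng)  # stand-in annual ET0 series

    data = np.sort(et0)
    n = data.size
    pp = (np.arange(1, n + 1) - 0.44) / (n + 0.12)  # Gringorten plotting positions

    candidates = {
        "pearson3": stats.pearson3(skew=stats.skew(et0)),
        "normal": stats.norm(),
        "gumbel": stats.gumbel_r(),
    }
    for name, dist in candidates.items():
        r = np.corrcoef(data, dist.ppf(pp))[0, 1]  # loc/scale do not affect r
        print(f"{name:9s} PPCC r = {r:.4f}")
    ```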

  11. Modelling the distributions and spatial coincidence of bluetongue vectors Culicoides imicola and the Culicoides obsoletus group throughout the Iberian peninsula.

    PubMed

    Calvete, C; Estrada, R; Miranda, M A; Borrás, D; Calvo, J H; Lucientes, J

    2008-06-01

    Data obtained by a Spanish national surveillance programme in 2005 were used to develop climatic models for predictions of the distribution of the bluetongue virus (BTV) vectors Culicoides imicola Kieffer (Diptera: Ceratopogonidae) and the Culicoides obsoletus group Meigen throughout the Iberian peninsula. Models were generated using logistic regression to predict the probability of species occurrence at an 8-km spatial resolution. Predictor variables included the annual mean values and seasonalities of a remotely sensed normalized difference vegetation index (NDVI), a sun index, interpolated precipitation and temperature. Using an information-theoretic paradigm based on Akaike's criterion, a set of best models accounting for 95% of model selection certainty were selected and used to generate an average predictive model for each vector. The predictive performances (i.e. the discrimination capacity and calibration) of the average models were evaluated by both internal and external validation. External validation was achieved by comparing average model predictions with surveillance programme data obtained in 2004 and 2006. The discriminatory capacity of both models was found to be reasonably high. The estimated areas under the receiver operating characteristic (ROC) curve (AUC) were 0.78 and 0.70 for the C. imicola and C. obsoletus group models, respectively, in external validation, and 0.81 and 0.75, respectively, in internal validation. The predictions of both models were in close agreement with the observed distribution patterns of both vectors. Both models, however, showed a systematic bias in their predicted probability of occurrence: observed occurrence was systematically overestimated for C. imicola and underestimated for the C. obsoletus group. Average models were used to determine the areas of spatial coincidence of the two vectors. Although their spatial distributions were highly complementary, areas of spatial coincidence were identified, mainly in Portugal and in the southwest of peninsular Spain. In a hypothetical scenario in which both Culicoides members had similar vectorial capacity for a BTV strain, these areas should be considered of special epidemiological concern because any epizootic event could be intensified by consecutive vector activity developed for both species during the year; consequently, the probability of BTV spreading to remaining areas occupied by both vectors might also be higher.

  12. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.

  13. Generalized Arcsine Laws for Fractional Brownian Motion.

    PubMed

    Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg

    2018-01-26

    The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian B_t starting from the origin, and evolving during time T, one considers the following three observables: (i) the duration t_+ the process is positive, (ii) the time t_last the process last visits the origin, and (iii) the time t_max when it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution expressed as an arcsine function, thus the name arcsine laws. We show how these laws change for fractional Brownian motion X_t, a non-Markovian Gaussian process indexed by the Hurst exponent H. It generalizes standard Brownian motion (i.e., H = 1/2). We obtain the three probabilities using a perturbative expansion in ε = H - 1/2. While all three probabilities are different, this distinction can only be made at second order in ε. Our results are confirmed to high precision by extensive numerical simulations.
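
    The classical H = 1/2 baseline is easy to verify by Monte Carlo (the paper's contribution is the perturbative generalization away from it). A sketch for the t_max observable, using a discrete random walk as a stand-in for Brownian motion:

    ```python
    # Monte Carlo check of the arcsine law F(x) = (2/pi) * arcsin(sqrt(x))
    # for the location of the maximum of Brownian motion (H = 1/2 case).
    import numpy as np

    rng = np.random.default_rng(9)
    n_paths, n_steps = 20000, 1000
    paths = np.cumsum(rng.normal(0.0, 1.0, (n_paths, n_steps)), axis=1)

    t_max = paths.argmax(axis=1) / n_steps  # fraction of T at which the max occurs
    for x in (0.1, 0.5, 0.9):
        emp = np.mean(t_max <= x)
        theory = (2.0 / np.pi) * np.arcsin(np.sqrt(x))
        print(f"x={x}: empirical {emp:.3f}  arcsine {theory:.3f}")
    ```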

  14. The probability of misassociation between neighboring targets

    NASA Astrophysics Data System (ADS)

    Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald

    2008-04-01

    This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular covariance-weighted norm of the difference between the targets' predicted measurements. For the Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single sensor as well as centralized fusion architecture multiple sensor tracking.

  15. Investigation of the relation between the return periods of major drought characteristics using copula functions

    NASA Astrophysics Data System (ADS)

    Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa

    2016-04-01

    Different drought characteristics (e.g. duration, average severity, and average areal extent) often have a monotonic relation: an increased magnitude of one is typically followed by a similar increase in the magnitude of another. Hence it is viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to statistically model drought characteristics. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained over Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation for the drought characteristics of duration, average severity, and average areal extent shows that joint probabilities of these characteristics can be satisfactorily achieved. Among the copula families investigated in this study, the elliptical family (i.e., the normal and t-student copula functions) resulted in the lowest root mean square error.
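
    A sketch of a copula-based joint return period for the bivariate "AND" case, using the Gumbel-Hougaard family; the parameter theta, the marginal non-exceedance levels, and the mean interarrival time mu are assumptions, not values estimated in the study.

    ```python
    # Sketch: joint "AND" return period T = mu / P(U > u, V > v) via a copula.
    import numpy as np

    def gumbel_copula(u, v, theta):
        """C(u, v) for the Gumbel-Hougaard family, theta >= 1."""
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    def joint_and_return_period(u, v, theta, mu=1.0):
        """Return period of {duration > F^-1(u) AND severity > F^-1(v)}."""
        return mu / (1.0 - u - v + gumbel_copula(u, v, theta))

    print(joint_and_return_period(u=0.9, v=0.9, theta=2.0))  # both in their top 10%
    ```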

  16. Multi-beam transmitter geometries for free-space optical communications

    NASA Astrophysics Data System (ADS)

    Tellez, Jason A.; Schmidt, Jason D.

    2010-02-01

    Free-space optical communications systems provide the opportunity to take advantage of higher data transfer rates and lower probability of intercept compared to radio-frequency communications. However, propagation through atmospheric turbulence, such as for airborne laser communication over long paths, results in intensity variations at the receiver and a corresponding degradation in bit error rate (BER) performance. Previous literature has shown that two transmitters, when separated sufficiently, can effectively average out the intensity-varying effects of atmospheric turbulence at the receiver. This research explores the impacts of adding more transmitters and the marginal reduction in the probability of signal fades while minimizing the overall transmitter footprint, an important design factor when considering an airborne communications system. Analytical results for the cumulative distribution function are obtained for the tilt-only case, while wave-optics simulations are used to simulate the effects of scintillation. These models show that the probability of signal fade is reduced as the number of transmitters is increased.

  17. Probabilistic reasoning in data analysis.

    PubMed

    Sirovich, Lawrence

    2011-09-20

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.

  18. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined through fuzzy assertions by ambiguous experts. The problem formulation is presented along with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  19. Work probability distribution and tossing a biased coin

    NASA Astrophysics Data System (ADS)

    Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar

    2011-01-01

    We show that the rare events present in the dissipated work that enters the Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing the work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.

  20. A Tensile Specimen of Tailor Rolled Blanks with Equal Probability in Yield and Its Mechanical Behavior Analysis

    PubMed Central

    Zhang, Sijia; Liu, Xianghua; Liu, Lizhong

    2018-01-01

    In this paper, the distribution of microstructure and mechanical properties along the rolling direction of tailor rolled blanks (TRB) was investigated. A tensile specimen with equal probability in yield (EYS) was first designed, considering variation both in thickness and in material strength. The uniaxial tension test was carried out with a digital image correlation method to analyze the mechanical behavior. The results showed that the strain distribution of the EYS was homogeneous, indicating that the new design philosophy for a TRB tensile specimen is reasonable and that the EYS is suitable for characterizing the mechanical behavior of TRB. The true stress-strain curves of the metal in different cross sections of the TRB were calculated. On the basis of these curves, a material model of TRB was constructed and implemented in finite element simulations of TRB uniaxial tensile tests. The strain distributions of the numerical and experimental results were similar, and the error between the post-fracture elongation of the specimen obtained by experiment and by FE ranged from 9.51% to 13.06%. The simulation results therefore match the experimental results well, and the material model has high accuracy as well as practicability. PMID:29710772
